
1). Psychological Purpose

This paper serves several purposes, the first of which is helping you gain insight into research papers in psychology. As this may be your first time reading and writing papers in psychology, one goal of Paper I is to give you insight into what goes into such papers. This article critique paper will help you learn about the various sections of an empirical research report by reading at least one peer-reviewed article (articles that have a Title Page, Abstract, Literature Review, Methods Section, Results Section, and References Page). I have already selected some articles for you to critique, so make sure you critique only one of the articles in the folder provided on Canvas. This paper will also give you some insights into how the results sections are written in APA-formatted research articles. Pay close attention to those sections, as throughout this course you’ll be writing up some results of your own! 

In this relatively short paper, you will read one of five articles posted on Canvas and summarize what the authors did and what they found. The first part of the paper should focus on summarizing the design the authors used for their project. That is, you will identify the independent and dependent variables, talk about how the authors carried out their study, and then summarize the results (you don’t need to fully understand the statistics in the results, but try to get a sense of what the authors did in their analyses). In the second part of the paper, you will critique the article for its methodological strengths and weaknesses. Finally, in part three, you will provide your references for the Article Critique Paper in APA format.  

2). APA Formatting Purpose

The second purpose of the Article Critique paper is to teach you proper American Psychological Association (APA) formatting. In the instructions below, I tell you how to format your paper using APA style. There are a lot of very specific requirements in APA papers, so pay attention to the instructions below as well as the APA style PowerPoint on Canvas. We are using the 7th edition of the APA style manual. 

3). Writing Purpose

Finally, this paper is intended to help you grow as a writer. Few psychology classes give you the chance to write papers and receive feedback on your work. This class will! We will give you feedback on this paper in terms of content, spelling, and grammar. 

Article Critique Paper (60 points possible)

Each student is required to write an article critique paper based on one of the research articles posted on Canvas (only those articles listed on Canvas can be critiqued – if you critique a different article, it will not be graded). If you are unclear about any of this information, please ask.

What is an article critique paper?

An article critique is a written communication that conveys your understanding of a research article and how it relates to the conceptual issues of interest to this course.  

This article critique paper will include the following components:

  1. Title page: 1 page (4 points)
  • Use APA style to present the appropriate information: 

    A running head must be included and formatted in APA style. 

    The running head is a short title of your creation (no more than 50 characters) that is in ALL CAPS. This running head is left-justified (flush left on the page). Look at the first page of these instructions, and you will see how to set up your running head. 
    There must be a page number on the title page that is right justified. It is in the header on the title page and all subsequent pages.  

    Your paper title appears on the title page. This is usually 12 words or less, and the first letter of each word is capitalized. It should be descriptive of the paper (For this paper, you should use the title of the article you are critiquing. The paper title can be the same title as in the Running head or it can differ – your choice). The title should be bolded. 
    Your name will appear on the title page; include two double-spaced lines between the title and your name (see the title page here). Your name and institutional affiliation (the name of your university) should not be bold. 
    Your institution will appear on the title page as well.
    For all papers, make sure to double-space EVERYTHING and use Times New Roman font. This includes everything from the title page through the references. 
    This is standard APA format. ALL of your future papers will include a similar title page.

  2. Summary of the Article: 1½ pages minimum, 3 pages maximum (14 points)

An article critique should briefly summarize, in your own words, the article’s research question and how it was addressed in the article. Below are some things to include in your summary. 

  • The summary itself will include the following: (Note – if the article involved more than one experiment, you can either choose to focus on one of the studies specifically or summarize the general design for all of the studies)
  1. Type of study (Was it experimental or correlational? How do you know?)
  2. Variables (What were the independent and dependent variables? How did they manipulate the IV? How did they operationally define the DV? Be specific with these. Define the terms independent and dependent variable and make sure to identify how they are operationally defined in the article)
  3. Method (What did the participants do in the study? How was it set up? Was there a random sample of participants? Was there random assignment to groups?). How was data collected (online, in person, in a laboratory?). 
  4. Summary of findings (What were their findings?)
  3. Critique of the study: 1½ pages minimum, 3 pages maximum (16 points)
  • This portion of the article critique assignment focuses on your own thoughts about the content of the article (i.e. your own ideas in your own words). For this section, please use the word “Critique” below the last sentence in your summary, and have the word “Critique” flush left. 
  • This section is a bit harder, but there are a number of ways to demonstrate critical thinking in your writing. Address at least four of the following elements. You can address more than four, but four is the minimum.  
  • 1). In your opinion, are there any confounding variables in the study (these could be extraneous variables or nuisance variables)? If so, explain what the confound is and specifically how it is impacting the results of the study. A sufficient explanation of this will include at least one paragraph of writing.
  • 2). Is the sample used in the study an appropriate sample? Is the sample representative of the population? Could the study be replicated if it were done again? Why or why not? 
  • 3). Did they measure the dependent variable in a way that is valid? Be sure to explain what validity is, and why you believe the dependent variable was or was not measured in a way that was valid. 
  • 4). Did the study authors correctly interpret their findings, or are there any alternative interpretations you can think of?
  • 5). Did the authors of the study employ appropriate ethical safeguards?
  • 6). Briefly describe a follow-up study you might design that builds on the findings of the study you read, or describe how the research presented in the article relates to research, articles, or material covered in other sections of the course
  • 7). Describe whether you feel the results presented in the article are weaker or stronger than the authors claim (and why); or discuss alternative interpretations of the results (i.e. something not mentioned by the authors) and/or what research might provide a test between the proposed and alternate interpretations
  • 8). Mention additional implications of the findings not mentioned in the article (either theoretical or practical/applied)
  • 9). Identify specific problems in the theory, discussion or empirical research presented in the article and how these problems could be corrected. If the problems you discuss are methodological in nature, then they must be issues that are substantial enough to affect the interpretations of the findings or arguments presented in the article. Furthermore, for methodological problems, you must justify not only why something is problematic but also how it could be resolved and why your proposed solution would be preferable.
  • 10). Describe how/why the method used in the article is either better or worse for addressing a particular issue than other methods 
  4. Brief summary of the article: one or two paragraphs (6 points)
  • Write the words “Brief Summary”, and then begin the brief summary below this
  • In ONE or TWO paragraphs maximum, summarize the article again, but this time I want it to be very short. In other words, take all of the information that you talked about in the summary portion of this assignment and write it again, but this time in only a few sentences. 
  • The reason for this section is that I want to make sure you can understand the whole study but that you can also write about it in a shorter paragraph that still emphasizes the main points of the article. Pretend that you are writing your own literature review for a research study, and you need to get the gist of an article that you read that helps support your own research across to your reader. Make sure to cite the original study (the article you are critiquing). 
  5. References: 1 page (4 points)
  • Provide the reference for this article in proper APA format (see the book Chapter 14 for appropriate referencing guidelines or the Chapter 14 powerpoint). 
  • If you cited other sources during either your critique or summary, reference them as well (though you do not need to cite other sources in this assignment – this is merely optional IF you happen to bring in other sources). Formatting counts here, so make sure to italicize where appropriate and watch which words you are capitalizing!
  6. Grammar and Writing Quality (6 points)

    Few psychology courses are as writing intensive as Research Methods (especially Research Methods Two next semester!). As such, I want to make sure that you develop writing skills early. This is something that needs special attention, so make sure to proofread your papers carefully. 
    Avoid run-on sentences, sentence fragments, spelling errors, and grammar errors. Writing quality will become more important in future papers, but this is where you should start to hone your writing skills. 
    We will give you feedback on your papers, but I recommend seeking some help from the FIU writing center to make sure your paper is clear, precise, and covers all needed material. I also recommend asking a few of your group members to read over your paper and make suggestions. You can do the same for them! 
    If your paper lacks originality and contains too much overlap with the paper you are summarizing (i.e. you do not paraphrase appropriately or cite your sources properly), you will lose some or all of the points from writing quality, depending on the extent of the overlap with the paper. For example, if sentences contain only one or two words changed from a sentence in the original paper, you will lose points from writing quality. 

Please note that you do not need to refer to any other sources other than the article on which you have chosen to write your paper. However, you are welcome to refer to additional sources if you choose. 

  7. Self-Rating Rubric (10 points). On Canvas, you will find a self-rating rubric. This rubric contains a summary of all the points available to you in this paper. You must submit your ratings for your own paper, using this rubric (essentially, you’ll grade your own paper before you hand it in). You will upload your completed rubric to the “article critique rubric” assignment on Canvas. 

    Please put effort into your ratings. Do not simply give yourself a 50/50. Really reflect on the quality of your paper and whether you meet all the criteria listed. 

    If it is clear that you have not reflected sufficiently on your paper (e.g., you give a rating of 2/2 for something that is not included in your paper), you will lose points. 

    This does not mean that you are guaranteed whatever grade you give to yourself. Instead, this will help you to 1) make sure that you have included everything you need to include, and 2) help you to reflect on your own writing. 
    In fact, we will use this very same rubric when we grade your paper, so you should know exactly what to expect for your grade! 

Other guidelines for the article critique papers

  • 1). Pay attention to the page length requirements – 1 page for the title page, 1.5 to 3 pages for the summary, 1.5 to 3 pages for the critique, one or two paragraphs for the brief summary, and 1 page for the references page. If you are under the minimum, we will deduct points. If you go over the maximum, we are a little more flexible (you can go over by half a page or so), but try to keep within the maximum page count. 
  • 2). Page size is 8½" × 11" with one-inch margins on all sides. You must use 12-point Times New Roman font. 
  • 3). As a general rule, ALL paragraphs and sentences are double spaced in APA papers. This includes the references, so make sure to double-space EVERYTHING.
  • 4). When summarizing the article in your own words, you need not continually cite the article throughout the rest of your critique. Nonetheless, you should follow proper referencing procedures, which means that: 

    If you are inserting a direct quote from any source, it must be enclosed in quotation marks and followed by a parenthetical reference to the source. “Let’s say I am directly quoting this current sentence and the next. I would then cite it with the author name, date of publication, and the page number for the direct quote” (Winter, 2013, p. 4). 

    Note: We will deduct points if you quote more than once per page, so keep quotes to a minimum. Paraphrase instead, but make sure you still give the original authors credit for the material by citing them or using their names (“In this article, Smith noted that …” or “In this article, the authors noted that…”)

    If you choose to reference any source other than your chosen article, it must be listed in a reference list.  

  • 5). Proofread everything you write. I actually recommend reading some sentences aloud to see if they flow well, or getting family or friends to read your work. Writing quality will become more important in future papers, so you should start working on that now!

If you have any questions about the articles, your ideas, or your writing, please ask. Although we won’t be able to review entire drafts of papers before they are handed in, we are very willing to discuss problems, concerns or issues that you might have.

https://doi.org/10.1177/0956797620939054

Psychological Science, 2020, Vol. 31(7), 770–780
© The Author(s) 2020

Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention

Gordon Pennycook1,2,3, Jonathon McPhetres1,2,4, Yunhao Zhang4, Jackson G. Lu4, and David G. Rand4,5,6

1Paul J. Hill School of Business, University of Regina; 2Kenneth Levene Graduate School of Business, University of Regina; 3Department of Psychology, University of Regina; 4Sloan School of Management, Massachusetts Institute of Technology; 5Institute for Data, Systems, and Society, Massachusetts Institute of Technology; and 6Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

Corresponding Author: Gordon Pennycook, University of Regina, Hill and Levene Schools of Business, 3737 Wascana Parkway, Regina, Saskatchewan, Canada, S4S 0A2. E-mail: gordon.pennycook@uregina.ca

Abstract

Across two studies with more than 1,700 U.S. adults recruited online, we present evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share. In Study 1, participants were far worse at discerning between true and false content when deciding what they would share on social media relative to when they were asked directly about accuracy. Furthermore, greater cognitive reflection and science knowledge were associated with stronger discernment. In Study 2, we found that a simple accuracy reminder at the beginning of the study (i.e., judging the accuracy of a non-COVID-19-related headline) nearly tripled the level of truth discernment in participants’ subsequent sharing intentions. Our results, which mirror those found previously for political fake news, suggest that nudging people to think about accuracy is a simple way to improve choices about what to share on social media.

Keywords: social media, decision making, policy making, reflectiveness, social cognition, open data, open materials, preregistered

Received 4/15/20; Revision accepted 6/8/20

We’re not just fighting an epidemic; we’re fighting an infodemic.

—Tedros Adhanom Ghebreyesus (2020), Director-General of the World Health Organization

The COVID-19 pandemic represents a substantial challenge to global human well-being. Not unlike other challenges (e.g., global warming), the impact of the COVID-19 pandemic depends on the actions of individual citizens and, therefore, the quality of the information to which people are exposed. Unfortunately, however, misinformation about COVID-19 has proliferated, including on social media (Frenkel, Alba, & Zhong, 2020; Russonello, 2020).

In the case of COVID-19, this misinformation comes in many forms—from conspiracy theories about the virus being created as a biological weapon in China to claims that coconut oil kills the virus. At its worst, misinformation of this sort may cause people to turn to ineffective (and potentially harmful) remedies, as well as to either overreact (e.g., by hoarding goods) or, more dangerously, underreact (e.g., by engaging in risky behavior and inadvertently spreading the virus). As a consequence, it is important to understand why people believe and share false (and true) information related to COVID-19—and to develop interventions to increase the quality of information that people share online.

Here, we applied a cognitive-science lens to the problem of COVID-19 misinformation. In particular, we
tested whether previous findings from the domain of
political fake news (fabricated news stories presented
as if from legitimate sources; Lazer et al., 2018) extended
to misinformation related to COVID-19. We did so by
drawing on a recently proposed inattention-based
account of misinformation sharing on social media
(Pennycook et  al., 2020). According to this account,
people generally wish to avoid spreading misinforma-
tion and, in fact, are often able to tell truth from false-
hood; however, they nonetheless share false and
misleading content because the social media context
focuses their attention on factors other than accuracy
(e.g., partisan alignment). As a result, they get dis-
tracted from even considering accuracy when deciding
whether to share news—leading them to not implement
their preference for accuracy and instead share mislead-
ing content. In support of this inattention-based account,
recent findings (Pennycook et al., 2020) showed that most
participants were surprisingly good at discerning between
true and false political news when asked to assess “the
accuracy of headlines”—yet headline veracity had very
little impact on participants’ willingness to share the head-
lines on social media. Accordingly, subtle nudges that made
the concept of accuracy salient increased the veracity of
subsequently shared political content—both in survey
experiments and in a large field experiment on Twitter.

It was unclear, however, how (or whether) these
results would generalize to COVID-19. First, it may be
that a greater level of specialized knowledge is required
to correctly judge the accuracy of health information
relative to political information. Thus, participants may
be unable to discern truth from falsehood in the context
of COVID-19, even when they do consider accuracy.
Second, it was unclear whether participants would be
distracted from accuracy in the way that Pennycook et
al. (2020) observed for political headlines. A great deal
of evidence suggests that people are motivated to seek
out, believe, and share politically congenial information
(Kahan, Peters, Dawson, & Slovic, 2017; Kunda, 1990;
Lee, Shin, & Hong, 2018; Mercier & Sperber, 2011; Shin
& Thorson, 2017). Thus, it seems likely that these par-
tisan motivations are what distracted participants from
accuracy in the study by Pennycook et al. (2020), who
used highly political stimuli. If so, we would not expect
similar results for COVID-19. Much of the COVID-19
information (and misinformation) circulating online is
apolitical (e.g., that COVID-19 can be cured by Vitamin
C). Furthermore, despite some outliers, there was (at
the time these studies were run) relatively little partisan
disagreement regarding the seriousness of the pandemic
(Galston, 2020). Indeed, as described below, there were
no partisan differences in likelihood to believe true or
false COVID-19 headlines in our data. Thus, if partisan-
ship were the key distractor, people should not be

distracted from accuracy when deciding whether to
share COVID-19-related content. On the contrary, one
might reasonably expect the life-and-death context of
COVID-19 to particularly focus attention on accuracy.

In the current research, we therefore investigated the
role that inattention plays in the sharing of COVID-
19-related content. Study 1 tested for a dissociation
between accuracy judgments and sharing intentions
when participants evaluated a set of true and false news
headlines about COVID-19. Study 1 also tested for cor-
relational evidence of inattention by evaluating the
relationship between truth discernment and analytic
cognitive style (as well as examining science knowl-
edge, partisanship, geographic proximity to COVID-19
diagnoses, and the tendency to overuse vs. underuse
medical services). Study 2 experimentally tested
whether subtly making the concept of accuracy salient
increased the quality of COVID-19 information that
people were willing to share online.

Statement of Relevance

Misinformation can amplify humanity’s challenges. A salient example is the COVID-19 pandemic. The environment created by the pandemic has bred a multitude of falsehoods even as truth has become a matter of life and death. In this research, we investigated why people believe and spread false (and true) news content about COVID-19. We found that people often fail to consider the accuracy of content when deciding what to share and that people who are more intuitive or less knowledgeable about science are more likely to believe and share falsehoods. We also tested an intervention to increase the truthfulness of the content shared on social media. Simply prompting people to think about the accuracy of an unrelated headline improved subsequent choices about what COVID-19 news to share. Accuracy nudges are straightforward for social media platforms to implement on top of the other approaches they are currently employing. With further optimization, interventions focused on increasing the salience of accuracy on social media could have a positive impact on countering the tide of misinformation.

Study 1

Method

We report how we determined our sample size, all data exclusions, all manipulations, and all measures in this study. Our data, materials, and preregistration are available on the Open Science Framework (https://osf.io/7d3xh/). At the end of both surveys, we informed participants which of the headlines were accurate (by re-presenting the true headlines).

Participants. This study was run on March 12, 2020.
We recruited 1,000 participants using Lucid, an online
recruiting source that aggregates survey respondents
from many respondent providers (Coppock & McClellan,
2019). Lucid uses quota sampling to provide a sample
that is matched to the U.S. public on age, gender, ethnic-
ity, and geographic region. We selected Lucid because it
provides a sample that is reasonably representative of the
U.S. population while being affordable for large samples.
Our sample size was based on the following factors: (a)
1,000 is a large sample size for this design, (b) it was
within our budget, and (c) it is similar to what was used in
past research (Pennycook et al., 2020). In total, 1,143 par-
ticipants began the study. However, 192 did not indicate
using Facebook or Twitter and therefore did not complete
the survey. A further 98 participants did not finish the
study and were removed. The final sample consisted of
853 participants (mean age = 46 years, age range = 18–
90; 357 men, 482 women, and 14 who responded “other/
prefer not to answer”).

Materials and procedure.
News-evaluation and news-sharing tasks. Through a partnership with Harvard Global Health Institute, we
acquired a list of 15 false and 15 true news headlines
relating to COVID-19 (available at https://osf.io/7d3xh/).
The false headlines were deemed to be false by authoritative sources (e.g., fact-checking sites such as snopes.com and factcheck.org, health experts such as mayoclinic.com, and credible science websites such as www.livescience.com). After the study was completed, we
realized that one of the false headlines (about bats being
the source of the virus) was more misleading or unveri-
fied than untrue—however, removing this headline did
not change our results, and so we retained it. The true
headlines came from reliable mainstream media sources.

Headlines were presented in the format of Facebook
posts: a picture accompanied by a headline and lede
sentence. Each participant was randomly assigned to
one of two conditions. In the accuracy condition, they
were asked, “To the best of your knowledge, is the
claim in the above headline accurate?” (yes/no). In the
sharing condition, they were asked, “Would you con-
sider sharing this story online (for example, through
Facebook or Twitter?)” (yes/no); the validity of this
self-report sharing measure is evidenced by the obser-
vation that news headlines that Mechanical Turk par-
ticipants report a higher likelihood of sharing indeed
receive more shares on Twitter (Mosleh, Pennycook, &

Rand, 2020). We counterbalanced the order of the yes/
no options (no/yes vs. yes/no) across participants.
Headlines were presented in a random order.

A key outcome from the news task is truth discernment—
that is, the extent to which individuals distinguish
between true and false content in their judgments
(Pennycook & Rand, 2019b). Discernment is defined as
the difference in accuracy judgments (or sharing inten-
tions) between true and false headlines. For example,
an individual who shared 9 out of 15 true headlines and
12 out of 15 false headlines would have a discernment
level of −.2 (i.e., .6 – .8), whereas an individual who
shared 9 out of 15 true headlines and 3 out of 15 false
headlines would have a discernment level of .4 (i.e.,
.6 – .2). Thus, a higher discernment score indicates a
higher sensitivity to truth relative to falsity.
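
To make the computation concrete, here is a minimal Python sketch (ours, not the authors’ code; it assumes yes/no responses coded as 1/0) of the discernment score just described:

    def discernment(true_responses, false_responses):
        # Proportion of true items endorsed minus proportion of false
        # items endorsed ("yes" = 1, "no" = 0).
        prop_true = sum(true_responses) / len(true_responses)
        prop_false = sum(false_responses) / len(false_responses)
        return prop_true - prop_false

    # Worked examples from the text: sharing 9/15 true and 12/15 false
    # headlines gives .6 - .8 = -.2; sharing 9/15 true and 3/15 false
    # headlines gives .6 - .2 = .4 (up to floating-point rounding).
    print(discernment([1] * 9 + [0] * 6, [1] * 12 + [0] * 3))  # approx. -0.2
    print(discernment([1] * 9 + [0] * 6, [1] * 3 + [0] * 12))  # approx.  0.4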

COVID-19 questions. Prior to the news-evaluation
task, participants were asked two questions specific to the
COVID-19 pandemic. First, they were asked, “How con-
cerned are you about COVID-19 (the new coronavirus)?”
which they answered using a sliding scale from 0 (not
concerned at all) to 100 (extremely concerned). Second,
they were asked “How often do you proactively check the
news regarding COVID-19 (the new coronavirus)?” which
they answered on a scale from 1 (never) to 5 (very often).

Additional correlates. We gave participants a six-item
Cognitive Reflection Test (CRT; Frederick, 2005) that con-
sisted of a reworded version of the original three-item test
and three items from a nonnumeric version (we excluded
the “hole” item; Thomson & Oppenheimer, 2016). The
CRT is a measure of one’s propensity to reflect on intu-
itions (Pennycook, Cheyne, Koehler, & Fugelsang, 2016;
Toplak, West, & Stanovich, 2011) and has strong test-
retest reliability (Stagnaro, Pennycook, & Rand, 2018). All
of the CRT items are constructed to elicit an intuitive
but incorrect response. Consider, for example, the fol-
lowing problem: If you are running a race and pass the
person in second place, what place are you in? For many
people, the intuitive response of “first place” pops into
mind—however, this is incorrect (if you pass the person
in second place, you overtake their position and are now
in second place yourself). Thus, correctly answering CRT
problems is associated with reflective thinking. The CRT
had acceptable reliability (Cronbach’s α = .69).
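
For readers unfamiliar with the reliability statistic reported here, Cronbach’s alpha can be computed from a respondents-by-items score matrix. A minimal sketch (ours, not the authors’ code), assuming NumPy:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        # scores: shape (n_respondents, n_items), e.g., 0/1 item accuracy.
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)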

Participants also completed a general science-knowledge
quiz—as a measure of general background knowledge
for scientific issues—that consisted of 17 questions
about basic science facts (e.g., “Antibiotics kill viruses
as well as bacteria,” “Lasers work by focusing sound
waves”; McPhetres & Pennycook, 2020). The scale had
acceptable reliability (Cronbach’s α = .77).

We also administered the Medical Maximizer-Minimizer
Scale (MMS; Scherer et al., 2016), which measures the
extent to which people are either “medical maximizers”
who tend to seek health care even for minor issues or,
rather, “medical minimizers” who tend to avoid health
care unless absolutely necessary. The MMS also had
acceptable reliability (Cronbach’s α = .86).

Finally, in addition to various demographic ques-
tions, we measured political ideology on both social
and fiscal issues, as well as Democrat versus Republican
Party alignment.

Attention checks. Following the recommendations of
Berinsky, Margolis, and Sances (2014), we added three
screener questions that put a subtle instruction in the
middle of a block of text. For example, in a block of
text ostensibly about which news sources people pre-
fer, we asked participants to select two specific options
(“FoxNews.com” and “NBC.com”) to check whether they
were reading the text. Full text for the screener questions,
along with the full materials for the study, are available
at https://osf.io/7d3xh/. Screener questions were placed
just prior to the news-evaluation and news-sharing tasks,
after the CRT, and after the science-knowledge scale and
MMS. To maintain the representativeness of our sample,
we followed our preregistered plan to include all partici-
pants in our main analyses, regardless of attentiveness.
As can be seen in Table S2 in the Supplemental Material
available online, our key result was robust (the effect size
for the interaction between content type and condition
remained consistent) across levels of attentiveness.

Analysis. We conducted all analyses of headline ratings
at the level of the rating, using linear regression with
robust standard errors clustered on participants and
headline.1 Ratings and all individual-differences mea-
sures were z scored; headline veracity was coded as –0.5
for false and 0.5 for true, and condition was coded as
−0.5 for accuracy and 0.5 for sharing. Our main analyses
used linear probability models instead of logistic regres-
sion because the coefficients are more readily interpre-
table. However, logistic regression yielded qualitatively
equivalent results. The coefficient on headline veracity
indicates overall level of discernment (the difference
between responses to true vs. false headlines), and the
interaction between condition and headline veracity indi-
cates the extent to which discernment differed between
the experimental conditions.
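
As an illustration only (the column names below are hypothetical, and the authors’ actual analysis clustered standard errors on both participants and headlines, which would require something like the linearmodels package or R), the model described above might be sketched with statsmodels as follows:

    import statsmodels.formula.api as smf

    def fit_study1_model(df):
        # df: one row per rating, with 'rating' (0/1 yes/no), 'veracity'
        # coded -0.5 (false) / 0.5 (true), 'condition' coded -0.5
        # (accuracy) / 0.5 (sharing), and a 'participant' id column.
        model = smf.ols("rating ~ veracity * condition", data=df)
        # One-way clustering on participant as an approximation of the
        # paper's two-way clustering on participant and headline.
        return model.fit(cov_type="cluster",
                         cov_kwds={"groups": df["participant"]})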

Results

Accuracy versus sharing. We began by comparing
discernment—the difference between responses to true
headlines and false headlines—across conditions. As pre-
dicted, we observed a significant interaction between
headline veracity and condition, β = −0.126, F(1, 25586) =
42.24, p < .0001, indicating that discernment was higher

for accuracy judgments than sharing intentions (Fig. 1;
similar results were obtained when we excluded the few
headlines that did not contain clear claims of fact or that
were political in nature; see Table S3 in the Supplemental
Material). In other words, veracity had a much bigger
impact on accuracy judgments, Cohen’s d = 0.657, 95%
confidence interval (CI) = [0.477, 0.836], F(1, 25586) =
42.24, p < .0001, than on sharing intentions, d = 0.121, 95% CI = [0.030, 0.212], F(1, 25586) = 6.74, p = .009. In particular, for false headlines, 32.4% more people were willing to share the headlines than rated them as accu- rate. In Study 2, we built on this observation to test the impact of experimentally inducing participants to think about accuracy when making sharing decisions.

Individual differences and truth discernment. Before turning to Study 2, we examined how various individual-differences measures correlated with discernment (i.e., how individual differences interacted with headline veracity). All relationships reported below were robust to including controls for age, gender, education (college degree or higher vs. less than college degree), ethnicity (White vs. non-White), and all interactions among controls, veracity, and condition.

Cognitive reflection. We found that scores on the CRT were positively related to both accuracy discernment and sharing discernment, as revealed by the interactions between CRT score and veracity, F(1, 25582) = 34.95, p < .0001, and F(1, 25582) = 4.98, p = .026, respectively. However, the relationship was significantly stronger for accuracy, as indicated by the three-way interaction among CRT score, veracity, and condition, F(1, 25582) = 14.68, p = .0001. In particular, CRT score was negatively correlated with belief in false headlines and uncorrelated with belief in true headlines, whereas CRT score was negatively correlated with sharing of both types of headlines (albeit more negatively correlated with sharing of false headlines compared with true headlines; for effect sizes, see Table 1). The pattern of CRT correlations observed here for COVID-19 misinformation is therefore consistent with what has been seen previously with political headlines (Pennycook & Rand, 2019b; Ross, Rand, & Pennycook, 2019).

Science knowledge. Like CRT score, science knowledge was positively correlated with both accuracy discernment, F(1, 25552) = 32.80, p < .0001, and sharing discernment, F(1, 25552) = 10.02, p = .002, but much more so for accuracy, as revealed by the three-way interaction among science knowledge, veracity, and condition, F(1, 25552) = 7.59, p = .006. In particular, science knowledge was negatively correlated with belief in false headlines and positively correlated with belief in true headlines, whereas science knowledge was negatively correlated with sharing of false headlines and uncorrelated with sharing of true headlines (for effect sizes, see Table 1).

Exploratory measures. Distance from the nearest COVID-19 epicenter (defined as a county with at least 10 confirmed coronavirus cases when the study was run; log-transformed because of right skew) was not significantly related to belief in either true or false headlines but was negatively correlated with sharing intentions for both true and false headlines—no significant interactions with veracity, p > .15; the interaction between distance and condition was marginal, F(1, 25522) = 3.07, p = .080. MMS score was negatively correlated with accuracy discernment, F(1, 25582) = 11.26, p = .0008. Medical maximizers showed greater belief in both true and false headlines (this pattern was more strongly positive for belief in false headlines); in contrast, there was no such correlation with sharing discernment, F(1, 25582) = 0.03, p = .87. Thus, medical maximizers were more likely to consider sharing both true and false headlines to the same degree, as revealed by the significant three-way interaction among maximizer-minimizer, veracity, and condition, F(1, 25582) = 7.58, p = .006. Preference for the Republican Party over the Democratic Party (partisanship) was not significantly related to accuracy discernment, F(1, 25402) = 0.45, p = .50, but was significantly negatively related to sharing discernment, F(1, 25402) = 8.28, p = .004. Specifically, stronger Republicans were less likely to share both true and false headlines but were particularly less likely (relative to Democrats) to share true headlines—however, the three-way interaction among partisanship, veracity, and condition was not significant, F(1, 25402) = 1.62, p = .20. For effect sizes, see Table 1.

Individual differences and COVID-19 attitudes. Finally, in Table 2, we report an exploratory analysis of how all of the above variables relate to concern about COVID-19 and how often people proactively check COVID-19-related news (self-reported). Both measures were negatively correlated with CRT score and preference for the Republican Party over the Democratic Party, positively correlated with being a medical maximizer, and unrelated to science knowledge when we used pairwise correlations but significantly positively related to science knowledge in models with all covariates plus demographic controls. Distance to the nearest county with at least 10 COVID-19 diagnoses was uncorrelated with concern and negatively correlated with news checking (although uncorrelated with news checking in the model with all measures and controls).

Table 1. Standardized Regression Coefficients for Simple Effects of Each Individual-Differences Measure Within Each Combination of Condition and Headline Veracity (Study 1)

                                     Accuracy condition              Sharing condition
Variable                             False          True             False          True
Cognitive Reflection Test score      -0.148***      0.008            -0.177***      -0.134***
                                     (-0.127***)    (0.006)          (-0.174***)    (-0.125***)
Science knowledge                    -0.080**       0.079**          -0.082*        -0.011
                                     (-0.067*)      (0.080**)        (-0.030*)      (-0.007)
Preference for Republican Party      0.003          -0.016           -0.070*        -0.128***
                                     (0.030)        (-0.018)         (-0.012)       (-0.079*)
Distance to nearest epicenter        -0.046†        -0.021           -0.099**       -0.099**
                                     (-0.005)       (-0.028)         (-0.091**)     (-0.078*)
Medical Maximizer-Minimizer          0.130***       0.047*           0.236***       0.233***
  Scale score                        (0.120***)     (0.051*)         (0.207***)     (0.200***)

Note: Values in parentheses show the results when controls are included for age, gender, education (college degree or higher vs. less than college degree), and ethnicity (White vs. non-White) and all interactions among controls, veracity, and condition.
†p < .1. *p < .05. **p < .01. ***p < .001.

Study 2

Study 1 established that people do not seem to readily
consider accuracy when deciding what to share on
social media. In Study 2, we tested an intervention in
which participants were subtly induced to consider
accuracy when making sharing decisions.

Method

Participants. This study was run from March 13 to
March 15, 2020. Following the same sample-size consid-
erations as in Study 1, we recruited 1,000 participants
using Lucid. In total, 1,145 participants began the study.
However, 177 did not indicate using Facebook or Twitter
and therefore did not complete the survey. A further 112
participants did not complete the study. The final sample
consisted of 856 participants (mean age = 47 years, age
range = 18–86; 385 men, 463 women, and 8 who
responded “other/prefer not to answer”).

Materials and procedure.
Accuracy induction. Each participant was randomly assigned to one of two conditions. In the control con-
dition, they began the news-sharing task as in Study 1.
In the treatment condition, they rated the accuracy of a
single headline (unrelated to COVID-19) before beginning the news-sharing task; following Pennycook et al. (2020),
we framed this as being for a pretest. Each participant
saw one of four possible headlines, all politically neutral
and unrelated to COVID-19 (see https://osf.io/7d3xh/ for
materials). An advantage of this design is that the manip-
ulation is subtle and not explicitly linked to the main
task. Thus, it is unlikely that any between-conditions
difference was driven by participants’ believing that the
accuracy question at the beginning of the treatment con-
dition was designed to make them take accuracy into
account when making sharing decisions during the main
experiment. It is therefore relatively unlikely that any
treatment effect was due to demand characteristics or
social desirability.

News-sharing task. Participants were shown the same
headlines as for Study 1 and (as in the sharing condition
of Study 1) were asked about their willingness to share
the headlines on social media. In this case, however, we
sought to increase the sensitivity of the measure by ask-
ing, “If you were to see the above on social media, how
likely would you be to share it?” which they answered on
a 6-point scale from 1 (extremely unlikely) to 6 (extremely
likely). As described above, some evidence in support
of the validity of this self-report sharing-intentions mea-
sure comes from Mosleh, Pennycook, and Rand (2020).
Further support for the specific paradigm used in this
study—in which participants are asked to rate the accu-
racy of a headline and then go on to indicate sharing
intentions—comes from Pennycook et al. (2020), who
found similar results using this paradigm on Mechanical
Turk and Lucid and in a field experiment on Twitter mea-
suring actual (rather than hypothetical) sharing.

Table 2. Pairwise Correlations Among Concern About COVID-19, Proactively Checking News About COVID-19, and the Individual-Differences Measures (Study 1)

                               COVID-19      COVID-19 news   CRT       Science     Partisanship   Distance to
Variable                       concern       checking        score     knowledge   (Republican)   nearest epicenter
COVID-19 concern               —
COVID-19 news checking         .64***        —
Cognitive Reflection Test      -.22***       -.10*           —
  (CRT) score                  (-0.17***)    (-0.07*)
Science knowledge              -.001         .06             .40***    —
                               (0.10**)      (0.10**)
Partisanship (Republican)      -.27***       -.21***         .09**     -.08*       —
                               (-0.19***)    (-0.15***)
Distance to nearest            -.05          -.07*           .01       -.03        .10*           —
  epicenter                    (-0.02)       (-0.04)
Medical maximizing             .41***        .36***          -.23***   -.16***     -.15***        -.05
                               (0.36***)     (0.34***)

Note: Values in parentheses are standardized coefficients from linear regression models including all individual-differences measures as well as age, gender, education (college degree or higher vs. less than college degree), and ethnicity (White vs. non-White).
*p < .05. **p < .01. ***p < .001.

Other measures. All of the additional measures included
in Study 1 were also included in Study 2.

Attention checks. The same screener questions included
in Study 1 were also included in Study 2. As in Study 1, to
maintain the sample’s representativeness, we present the
results for all participants in the main text and show the
robustness of our key result across levels of attentiveness
in the Supplemental Material (see Table S5).

Analysis. All analyses were conducted at the level of
the rating, using linear regression with robust standard
errors clustered on participants and headline. Sharing
intentions were rescaled such that 1 on the 6-point Likert
scale was 0, and 6 on the 6-point Likert scale was 1.
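
That rescaling is simply the linear map (x − 1)/5, which sends 1 to 0 and 6 to 1; a one-line sketch (ours, not the authors’ code):

    def rescale_sharing(x):
        # Map the 1-6 Likert response onto the 0-1 interval: 1 -> 0, 6 -> 1.
        return (x - 1) / 5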

Results

As predicted, we observed a significant positive interac-
tion between headline veracity and treatment, β = 0.039,
F(1, 25623) = 17.88, p < .0001; the treatment increased sharing discernment (i.e., participants were more likely to share true headlines relative to false headlines after they rated the accuracy of a single non-COVID-related headline; Fig. 2). Specifically, although participants in the control condition were not significantly more likely to say that they would share true headlines compared with false headlines, d = 0.050, 95% CI = [−0.033, 0.133], F(1, 25623) = 1.41, p = .24, in the treatment condition,

sharing intentions for true headlines were significantly
higher than for false headlines, d = 0.142, 95% CI =
[0.049, 0.235], F(1, 25623) = 8.89, p = .003. Quantita-
tively, sharing discernment (the difference in sharing
likelihood of true relative to false headlines) was 2.8
times higher in the treatment condition compared with
the control condition. Furthermore, the treatment effect
on sharing discernment was not significantly moderated
by CRT performance, science knowledge, partisanship,
distance to the nearest epicenter, or MMS score (ps >.10
for all three-way interactions among headline veracity,
treatment, and individual-differences measure). The
treatment effect was also robust to excluding the few
headlines that did not contain clear claims of fact or
that were political in nature (see Table S6 in the Supple-
mental Material).

[Figure 2. Results from Study 2: percentage of headlines participants said they would be likely to share, separately for each combination of headline veracity (true vs. false) and condition (control vs. treatment). For this visualization, sharing intentions are discretized using the scale midpoint (i.e., 1–3 = 0, 4–6 = 1) to give a more easily interpretable measurement; all analyses are conducted using the full (nondiscretized) scale, and plotting the average (nondiscretized) sharing intentions looks qualitatively similar. For the equivalent plot using mean sharing intentions instead of the discretized percentages, see Figure S1 in the Supplemental Material available online. Error bars indicate 95% confidence intervals.]

Our interpretation of the treatment effect is that the
accuracy nudge makes participants more likely to con-
sider accuracy when deciding whether to share. Given
this mechanism, the extent to which the treatment
increases or decreases sharing of a given headline
should reflect the underlying perceptions of the head-
line’s accuracy. That is, increasing an individual’s atten-
tion to accuracy should yield the largest changes in
sharing intentions for headlines that are more unilater-
ally perceived to be true or false. To provide evidence
for such a relationship, we performed a post hoc item-
level analysis. For each headline, we examined how
the effect of the treatment on sharing (i.e., average
sharing intention in the treatment condition minus aver-
age sharing intention in the control condition) varied
on the basis of the average accuracy rating given to that
headline by participants in the accuracy condition of
Study 1. Because participants in Study 2 did not rate
the accuracy of the COVID-19-related headlines, we
used average Study 1 ratings as a proxy for how accu-
rate participants in Study 2 would likely deem the head-
lines to be. As shown in Figure 3, there was indeed a

0%
10%
20%
30%
40%
50%
60%
70%

Control Treatment

Li
ke

lih
oo

d
of

S
ha

rin
g

Condition
True
False

Fig. 2. Results from Study 2: percentage of headlines participants
said they would be likely to share, separately for each combina-
tion of headline veracity (true vs. false) and condition (control vs.
treatment). For this visualization, we discretize sharing intentions
using the scale midpoint (i.e., 1–3 = 0, 4–6 = 1) to give a more eas-
ily interpretable measurement; all analyses are conducted using the
full (nondiscretized) scale, and plotting the average (nondiscretized)
sharing intentions looks qualitatively similar. For the equivalent plot
using mean sharing intentions instead of the discretized percentages,
see Figure S1 in the Supplemental Material available online. Error
bars indicate 95% confidence intervals.

0.04

0.02

0
0.02
0.04

0.06

0.08

0 0.2 0.4 0.6 0.8 1

Tr
ea

tm
en

t E
ff

ec
t i

n
St

ud
y

2

Perceived Accuracy in Study 1

True
False

Fig. 3. Relationship between the effect of the treatment in Study 2
and the average accuracy rating from participants in the accuracy
condition of Study 1 as a function of headline veracity (true vs. false).
The dashed line shows the best-fitting regression.

Combating COVID-19 Misinformation 777

strong positive correlation between a headline’s per-
ceived accuracy and the impact of the treatment, r(28) =
.76, p < .0001. Headlines that were more likely to be identified as true (on the basis of Study 1 data) were more strongly positively impacted (sharing increases) by nudging people to consider accuracy. This suggests that the accuracy nudge, as we hypothesized, increased people’s attention to whether the headlines seem true or not when they decided what to share.
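
A hedged Python sketch of this post hoc item-level analysis (ours, not the authors’ code; column names are hypothetical, and pandas and SciPy are assumed):

    import pandas as pd
    from scipy.stats import pearsonr

    def item_level_treatment_effect(study2: pd.DataFrame,
                                    study1_accuracy: pd.Series):
        # study2: one row per rating, with 'headline', 'treated' (0/1),
        # and 'sharing' (rescaled 0-1). study1_accuracy: mean accuracy
        # rating per headline from Study 1's accuracy condition.
        means = study2.groupby(["headline", "treated"])["sharing"].mean()
        # Per-headline treatment effect: treatment mean minus control mean.
        effect = means.xs(1, level="treated") - means.xs(0, level="treated")
        r, p = pearsonr(study1_accuracy.loc[effect.index], effect)
        return effect, r, p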

Discussion

Our results are consistent with an inattention-based
account (Pennycook et  al., 2020) of COVID-19-
misinformation transmission on social media. In Study
1, participants were willing to share fake news about
COVID-19 that they would have apparently been able
to identify as being untrue if they were asked directly
about accuracy. Put differently, participants were far
less discerning if they were asked about whether they
would share a headline on social media than if they
were asked about its accuracy. Furthermore, individuals
who were more likely to rely on their intuitions and
who were lower in basic scientific knowledge were
worse at discerning between true and false content (in
terms of both accuracy and sharing decisions). In Study
2, we demonstrated the promise of a behavioral interven-
tion informed by this inattention-based account. Prior to
deciding which headlines they would share on social
media, participants were subtly primed to think about
accuracy by being asked to rate the accuracy of a single
non-COVID-related news headline. This minimal, content-
neutral intervention nearly tripled participants’ level of
discernment between sharing true and sharing false
headlines.

This research has important theoretical and practical
implications. Theoretically, our findings shed new light
on the perspective that inattention plays an important
role in the sharing of misinformation online. By dem-
onstrating the role of inattention in the context of
COVID-19 misinformation (rather than politics), our
results suggest that partisanship is not, apparently, the
key factor distracting people from considering accuracy
on social media. Instead, the tendency to be distracted
from accuracy on social media seems more general.
Thus, it seems likely that people are being distracted
from accuracy by more fundamental aspects of the
social media context. For example, social media plat-
forms provide immediate, quantified feedback on the
level of approval from one’s social connections (e.g.,
“likes” on Facebook). Thus, attention may by default
be focused on other factors, such as concerns about
social validation and reinforcement (e.g., Brady,

Crockett, & Van Bavel, 2020; Crockett, 2017) rather than
accuracy. Another possibility is that because news con-
tent is intermixed with content in which accuracy is not
relevant (e.g., baby photos, animal videos), people may
habituate to a lower level of accuracy consideration
when in the social media context. The finding that
people are inattentive to accuracy even when making
judgments about sharing content related to a global
pandemic raises important questions about the nature
of the social media ecosystem.

The present studies also add to the literature on
reasoning and truth discernment. While much of the
discussion around fake news has focused on political
ideology and partisan identity (Beck, 2017; Kahan,
2017; Taub, 2017; Van Bavel & Pereira, 2018), our data
are more consistent with recent studies on political
misinformation that provide both correlational
(Pennycook & Rand, 2019b; including data from Twitter
sharing, Mosleh, Pennycook, Arechar, & Rand, 2020)
and experimental (Bago, Rand, & Pennycook, 2020)
evidence for an important role of analytic cognitive
style. That is, our data suggest that an important con-
tributor to lack of truth discernment for health misin-
formation is the type of intuitive or emotional thinking
that has been associated with conspiratorial beliefs
(Swami, Voracek, Stieger, Tran, & Furnham, 2014; Vitriol
& Marsh, 2018) and superstition (Elk, 2013; Lindeman
& Svedholm, 2012; Risen, 2016). These findings high-
light the importance of reflecting on incorrect intuitions
and avoiding the traps of cognitive miserliness for a
variety of psychological outcomes and regardless of
political ideology (Pennycook, Fugelsang, & Koehler,
2015; Stanovich, 2005).

From a practical perspective, misinformation is a par-
ticularly significant problem in uncertain news environ-
ments (e.g., immediately following a major news event;
Starbird, 2019; Starbird, Maddock, Orand, Achterman, &
Mason, 2014). In cases where having high quality infor-
mation may literally be a matter of life and death—such
as for COVID-19—the need to develop interventions to
fight misinformation becomes even more crucial. Consis-
tent with recent work on political misinformation (Fazio,
2020; Pennycook et al., 2020), the present results show
that simple and subtle reminders about the concept of
accuracy may be sufficient to improve people’s sharing
decisions regarding information about COVID-19 and
therefore improve the accuracy of the information about
COVID-19 on social media. Although accuracy nudges
are far from a complete solution, the intervention may
nonetheless have important downstream effects on the
overall quality of information shared online (e.g., because
of network effects; see Pennycook et al., 2020). Further-
more, our treatment translates directly into a suite of real-world interventions that social media companies
real-world interventions that social media companies
could easily deploy by periodically asking users to rate
the accuracy of randomly sampled headlines. Such rat-
ings could also potentially help identify misinformation
via crowdsourcing (Pennycook & Rand, 2019a)—espe-
cially given that, at least for the 30 headlines considered
here, participants (on average) rated the true headlines
as much more accurate than the false headlines.

Our research has several limitations. First, our evi-
dence is restricted to the United States and therefore
needs to be tested elsewhere in the world. Next,
although our sample was quota matched to the U.S.
population on age, gender, ethnicity, and region, it was
not obtained via probability sampling and therefore
should not be considered truly nationally representa-
tive. We also used a particular set of true and false
headlines about COVID-19. It is important for future
work to test the generalizability of our findings to other
headlines and to information (and misinformation)
about COVID-19 that comes in forms other than head-
lines (e.g., e-mails, text posts, and memes about sup-
posed disease cures). Finally, our sharing intentions
were hypothetical, and our experimental accuracy
induction was performed in a “lab” context. Thus, one
may be concerned about whether our results will
extend to naturalistic social media contexts. As men-
tioned above, we see three reasons to expect that our
results will generalize to real sharing behavior. First,
there is evidence (at the level of the headline) that self-
reported sharing intentions correlate meaningfully with
actual sharing on social media platforms (Mosleh,
Pennycook, & Rand, 2020). Second, because our manip-
ulation was quite subtle, we believe it is unlikely that
differences in sharing intentions between the treatment
and control conditions (as opposed to overall sharing
levels) are driven by demand effects or social desir-
ability bias. Third, past research using similar methods
has shown evidence of external validity: Pennycook
et al. (2020) targeted the same accuracy-reminder inter-
vention at political misinformation and found that the
results from the survey experiments were replicated
when they delivered the intervention via direct message
on Twitter, significantly improving the quality of sub-
sequent tweets from individuals who are prone to shar-
ing misleading political news content.

Conclusion

Our results shed light on why people believe and share
misinformation related to COVID-19 and point to a
suite of interventions based on accuracy nudges that
social media platforms could directly implement. Such

interventions are easily scalable and do not require platforms to make decisions about what content to cen-
sor. We hope that social media platforms will consider
this approach in their efforts to improve the quality of
information shared online.

Transparency

Action Editor: Marc J. Buehner
Editor: Patricia J. Bauer
Author Contributions

G. Pennycook and D. G. Rand developed the study concept. J. McPhetres created the survey. All authors designed the study. G. Pennycook, J. McPhetres, and D. G. Rand analyzed the data. All authors interpreted the data. G. Pennycook and D. G. Rand drafted the manuscript, and all authors provided critical revisions. All authors approved the final version of the manuscript for submission.

Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of
interest with respect to the authorship or the publication
of this article.

Funding
We gratefully acknowledge funding from the Ethics and Governance of Artificial Intelligence Initiative of The Miami Foundation, the William and Flora Hewlett Foundation, the Omidyar Network, the John Templeton Foundation, the Canadian Institute of Health Research, and the Social Sciences and Humanities Research Council of Canada.

Open Practices
All data and materials have been made publicly available via the Open Science Framework and can be accessed at https://osf.io/7d3xh/. The design and analysis plans for Studies 1 and 2 were preregistered at AsPredicted (copies of the preregistration can be seen at https://osf.io/7d3xh/). Deviations from the preregistration are noted in the text. The complete Open Practices Disclosure for this article can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797620939054. This article has received the badges for Open Data, Open Materials, and Preregistration. More information about the Open Practices badges can be found at http://www.psychologicalscience.org/publications/badges.

ORCID iDs

Gordon Pennycook https://orcid.org/0000-0003-1344-6143
Jonathon McPhetres https://orcid.org/0000-0002-6370-7789
Jackson G. Lu https://orcid.org/0000-0002-0144-9171

Acknowledgments

We thank Stefanie Friedhoff, Michael N. Stagnaro, and Daisy
Winner for assistance identifying true and false headlines,
and we thank Antonio A. Arechar for assistance executing
the studies.

Supplemental Material

Additional supporting information can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797620939054

Note

1. Our preregistration erroneously indicated that we would
cluster standard errors only on participant; doing so does not
qualitatively change the results.
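
For readers unfamiliar with this correction: responses here are not independent, because each participant rates many headlines and each headline is rated by many participants, so standard errors should be clustered. A minimal sketch using Python's statsmodels on synthetic data (all column names and values are assumptions for illustration; the authors' actual analysis scripts are available via the OSF link above):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per (participant, headline) response.
rng = np.random.default_rng(0)
n_participants, n_headlines = 100, 30
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_participants), n_headlines),
    "headline_id": np.tile(np.arange(n_headlines), n_participants),
    "condition": np.repeat(rng.integers(0, 2, n_participants), n_headlines),
    "veracity": np.tile(rng.integers(0, 2, n_headlines), n_participants),
})
df["sharing"] = rng.random(len(df))  # placeholder outcome

model = smf.ols("sharing ~ condition * veracity", data=df)
# Cluster standard errors on participant only, as the preregistration
# described; the published analysis clusters on headline as well.
result = model.fit(cov_type="cluster",
                   cov_kwds={"groups": df["participant_id"]})
print(result.summary())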

References

Adhanom Ghebreyesus, T. (2020, February 15). Munich Security Conference. World Health Organization. Retrieved from https://www.who.int/dg/speeches/detail/munich-security-conference

Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. Advance online publication. doi:10.1037/xge0000729

Beck, J. (2017, March 13). This article won’t change your mind: The facts on why facts alone can’t fight false beliefs. The Atlantic. Retrieved from https://www.theatlantic.com/science/archive/2017/03/this-article-wont-change-your-mind/519093/

Berinsky, A. J., Margolis, M. F., & Sances, M. W. (2014).
Separating the shirkers from the workers? Making sure
respondents pay attention on self-administered surveys.
American Journal of Political Science, 58, 739–753.
doi:10.1111/ajps.12081

Brady, W. J., Crockett, M. J., & Van Bavel, J. J. (2020). The
MAD model of moral contagion: The role of motivation,
attention, and design in the spread of moralized content
online. Perspectives on Psychological Science. Advance
online publication. doi:10.1177/1745691620917336

Coppock, A., & McClellan, O. A. (2019). Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Research & Politics, 6(1). doi:10.1177/2053168018822174

Crockett, M. J. (2017). Moral outrage in the digital age. Nature
Human Behaviour, 1, 769–771. doi:10.1038/s41562-017-0213-3

van Elk, M. (2013). Paranormal believers are more prone to illusory agency detection than skeptics. Consciousness and Cognition, 22, 1041–1046. doi:10.1016/j.concog.2013.07.004

Fazio, L. (2020). Pausing to consider why a headline is
true or false can help reduce the sharing of false news.
Harvard Kennedy School Misinformation Review, 1(2).
doi:10.37016/mr-2020-009

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. doi:10.1257/089533005775196732

Frenkel, S., Alba, D., & Zhong, R. (2020, March 8). Surge of virus misinformation stumps Facebook and Twitter. The New York Times. Retrieved from https://www.nytimes.com/2020/03/08/technology/coronavirus-misinformation-social-media.html

Galston, W. A. (2020, March 30). Polling shows Americans see COVID-19 as a crisis, don’t think US is overreacting. Brookings. Retrieved from https://www.brookings.edu/blog/fixgov/2020/03/30/polling-shows-americans-see-covid-19-as-a-crisis-dont-think-u-s-is-overreacting/

Kahan, D. M. (2017). Misconceptions, misinformation, and the logic of identity-protective cognition. SSRN. doi:10.2139/ssrn.2973067

Kahan, D. M., Peters, E., Dawson, E., & Slovic, P. (2017).
Motivated numeracy and enlightened self-government.
Behavioural Public Policy, 1, 54–86.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498. doi:10.1037/0033-2909.108.3.480

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J.,
Greenhill, K. M., Menczer, F., . . . Zittrain, J. L. (2018).
The science of fake news. Science, 359, 1094–1096.
doi:10.1126/science.aao2998

Lee, C., Shin, J., & Hong, A. (2018). Does social media use really make people politically polarized? Direct and indirect effects of social media use on political polarization in South Korea. Telematics and Informatics, 35, 245–254. doi:10.1016/j.tele.2017.11.005

Lindeman, M., & Svedholm, A. M. (2012). What’s in a term?
Paranormal, superstitious, magical and supernatural
beliefs by any other name would mean the same. Review
of General Psychology, 16, 241–255. doi:10.1037/a0027158

McPhetres, J., & Pennycook, G. (2020). Science beliefs, political ideology, and cognitive sophistication. OSF Preprints. doi:10.31219/osf.io/ad9v7

Mercier, H., & Sperber, D. (2011). Why do humans reason?
Arguments for an argumentative theory. Behavioral & Brain
Sciences, 34, 57–74. doi:10.1017/S0140525X10000968

Mosleh, M., Pennycook, G., Arechar, A. A., & Rand, D. G.
(2020). Digital fingerprints of cognitive reflection.
PsyArXiv Preprints. doi:10.31234/osf.io/qaswn

Mosleh, M., Pennycook, G., & Rand, D. (2020). Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. PLOS ONE, 15(2), Article e0228882. doi:10.1371/journal.pone.0228882

Pennycook, G., Cheyne, J. A., Koehler, D. J., & Fugelsang, J. A.
(2016). Is the Cognitive Reflection Test a measure of both
reflection and intuition? Behavior Research Methods, 48,
341–348. doi:10.3758/s13428-015-0576-1

Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles,
D., & Rand, D. G. (2020). Understanding and reducing
the spread of misinformation online. PsyArXiv Preprints.
doi:10.31234/osf.io/3n9u8

Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015).
Everyday consequences of analytic thinking. Current
Directions in Psychological Science, 24, 425–432.
doi:10.1177/0963721415604610

Pennycook, G., & Rand, D. G. (2019a). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, USA, 116, 2521–2526. doi:10.1073/pnas.1806781116

Pennycook, G., & Rand, D. G. (2019b). Lazy, not biased:
Susceptibility to partisan fake news is better explained by
lack of reasoning than by motivated reasoning. Cognition,
188, 39–50. doi:10.1016/j.cognition.2018.06.011

Risen, J. L. (2016). Believing what we do not believe: Acquiescence to superstitious beliefs and other powerful intuitions. Psychological Review, 123, 182–207. doi:10.1037/rev0000017

Ross, R. M., Rand, D. G., & Pennycook, G. (2019, November 13).
Beyond “fake news”: The role of analytic thinking in the
detection of inaccuracy and partisan bias in news headlines.
PsyArXiv. doi:10.31234/osf.io/cgsx6

Russonello, G. (2020, March 13). Afraid of coronavirus? That might say something about your politics. The New York Times. Retrieved from https://www.nytimes.com/2020/03/13/us/politics/coronavirus-trump-polling.html

Scherer, L. D., Caverly, T. J., Burke, J., Zikmund-Fisher, B. J.,
Kullgren, J. T., Steinley, D., . . . Fagerlin, A. (2016).
Development of the Medical Maximizer-Minimizer Scale.
Health Psychology, 35, 1276–1287. doi:10.1037/hea0000417

Shin, J., & Thorson, K. (2017). Partisan selective sharing:
The biased diffusion of fact-checking messages on
social media. Journal of Communication, 67, 233–255.
doi:10.1111/jcom.12284

Stagnaro, M. N., Pennycook, G., & Rand, D. G. (2018).
Performance on the Cognitive Reflection Test is stable
across time. Judgment and Decision Making, 13, 260–267.

Stanovich, K. E. (2005). The robot’s rebellion: Finding meaning in the age of Darwin. Chicago, IL: Chicago University Press.

Starbird, K. (2019, July 25). Disinformation’s spread: Bots,
trolls and all of us. Nature, 571(7766), Article 449.
doi:10.1038/d41586-019-02235-x

Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing. In M. Kindling & E. Greifeneder (Eds.), iConference 2014 Proceedings (pp. 654–662). Grandville, MI: iSchools. doi:10.9776/14308

Swami, V., Voracek, M., Stieger, S., Tran, U. S., & Furnham, A. (2014). Analytic thinking reduces belief in conspiracy theories. Cognition, 133, 572–585. doi:10.1016/j.cognition.2014.08.006

Taub, A. (2017). The real story about fake news is partisanship. The New York Times. Retrieved from https://www.nytimes.com/2017/01/11/upshot/the-real-story-about-fake-news-is-partisanship.html

Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating
an alternate form of the Cognitive Reflection Test.
Judgment and Decision Making, 11, 99–113.

Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The
Cognitive Reflection Test as a predictor of performance
on heuristics-and-biases tasks. Memory & Cognition, 39,
1275–1289. doi:10.3758/s13421-011-0104-1

Van Bavel, J. J., & Pereira, A. (2018). The partisan brain: An
identity-based model of political belief. Trends in Cognitive
Sciences, 22, 213–224. doi:10.1016/j.tics.2018.01.004

Vitriol, J. A., & Marsh, J. K. (2018). The illusion of explanatory depth
and endorsement of conspiracy beliefs. European Journal of
Social Psychology, 48, 955–969. doi:10.1002/ejsp.2504
