Evidence-Based Practice in Psychology
APA Presidential Task Force on Evidence-Based Practice
The evidence-based practice movement has become an
important feature of health care systems and health care
policy. Within this context, the APA 2005 Presidential Task
Force on Evidence-Based Practice defines and discusses
evidence-based practice in psychology (EBPP). In an in-
tegration of science and practice, the Task Force’s report
describes psychology’s fundamental commitment to sophis-
ticated EBPP and takes into account the full range of
evidence psychologists and policymakers must consider.
Research, clinical expertise, and patient characteristics are
all supported as relevant to good outcomes. EBPP pro-
motes effective psychological practice and enhances public
health by applying empirically supported principles of psy-
chological assessment, case formulation, therapeutic rela-
tionship, and intervention. The report provides a rationale
for and expanded discussion of the EBPP policy statement
that was developed by the Task Force and adopted as
association policy by the APA Council of Representatives
in August 2005.
Keywords: evidence-based practice; best available research
evidence; clinical expertise; patient characteristics, culture,
and preferences
From the very first conceptions of applied psychology as articulated by Lightner Witmer, who formed the first psychological clinic in 1896 (McReynolds,
1997), psychologists have been deeply and uniquely asso-
ciated with an evidence-based approach to patient care. As
Witmer (1907/1996) pointed out, “the pure and the applied
sciences advance in a single front. What retards the
progress of one, retards the progress of the other; what
fosters one, fosters the other” (p. 249). As early as 1947,
the idea that doctoral psychologists should be trained as
both scientists and practitioners became American Psycho-
logical Association (APA) policy (Shakow et al., 1947).
Early practitioners such as Frederick C. Thorne (1947)
articulated the methods by which psychological practitio-
ners integrate science into their practice by “increasing
application of the experimental approach to the individual
case and to the clinician’s own ‘experience’” (p. 159).
Thus, psychologists have been on the forefront of the
development of evidence-based practice for decades.
Evidence-based practice in psychology is therefore
consistent with the past 20 years of work in evidence-based
medicine, which advocated for improved patient outcomes
by informing clinical practice with relevant research (Sox
& Woolf, 1993; Woolf & Atkins, 2001). Sackett, Rosen-
berg, Gray, Haynes, and Richardson (1996) described
evidence-based medicine as “the conscientious, explicit,
and judicious use of current best evidence in making deci-
sions about the care of individual patients” (pp. 71–72).
The use and misuse of evidence-based principles in the
practice of health care has affected the dissemination of
health care funds, but not always to the benefit of the
patient. Therefore, psychologists, whose training is
grounded in empirical methods, have an important role to
play in the continuing development of evidence-based
practice and its focus on improving patient care.
One approach to implementing evidence-based prac-
tice in health care systems has been through the develop-
ment of guidelines for best practice. During the early part
of the evidence-based practice movement, APA recognized
the importance of a comprehensive approach to the con-
ceptualization of guidelines. APA also recognized the risk
that guidelines might be used inappropriately by commer-
cial health care organizations not intimately familiar with
the scientific basis of practice to dictate specific forms of
treatment and restrict patient access to care. In 1992, APA
formed a joint task force of the Board of Scientific Affairs,
the Board of Professional Affairs, and the Committee for
the Advancement of Professional Practice.

Author Note. The Task Force members were Carol D. Goodheart, EdD (Chair; Inde-
pendent Practice, Princeton, NJ); Ronald F. Levant, EdD (ex-officio;
University of Akron); David H. Barlow, PhD (Boston University); Jean
Carter, PhD (Independent Practice, Washington, DC); Karina W. David-
son, PhD (Columbia University); Kristofer J. Hagglund, PhD (University
of Missouri—Columbia); Steven D. Hollon, PhD (Vanderbilt University);
Josephine D. Johnson, PhD (Independent Practice, Livonia, MI); Laura C.
Leviton, PhD (Robert Wood Johnson Foundation, Princeton, NJ); Alvin
R. Mahrer, PhD (Emeritus, University of Ottawa); Frederick L. Newman,
PhD (Florida International University); John C. Norcross, PhD (Univer-
sity of Scranton); Doris K. Silverman, PhD (New York University); Brian
D. Smedley, PhD (The Opportunity Agenda, Washington, DC); Bruce E.
Wampold, PhD (University of Wisconsin); Drew I. Westen, PhD (Emory
University); Brian T. Yates, PhD (American University); Nolan W. Zane,
PhD (University of California, Davis). Professional American Psycholog-
ical Association (APA) staff included Geoffrey M. Reed, PhD, and Lynn
F. Bufka, PhD (Practice Directorate); Paul D. Nelson, PhD, and Cynthia
D. Belar, PhD (Education Directorate); and Merry Bullock, PhD (Science
Directorate).
The Task Force wishes to thank John Weisz, PhD, for his assistance
in drafting portions of the report related to children and adolescents;
James Mitchell and Omar Rehman, APA Professional Development in-
terns, for their assistance throughout the work of the Task Force; and
Ernestine Penniman for administrative support.
In August 2005, the APA Council of Representatives approved the
policy statement on evidence-based practice in psychology developed by
the Task Force and received a version of this report. The report contains
an expanded discussion of the issues raised in the policy statement,
including the rationale and references supporting it. The policy statement
is available online at http://www.apa.org/practice/ebpstatement and as
the Appendix of this article.
Correspondence concerning this article should be addressed to the
Practice Directorate, American Psychological Association, 750 First
Street NE, Washington, DC 20002-4242.

The document
developed by this task force—the Template for Developing
Guidelines: Interventions for Mental Disorders and Psy-
chosocial Aspects of Physical Disorders (hereinafter, Tem-
plate)—was approved by the APA Council of Representa-
tives in 1995 (American Psychological Association, 1995).
The Template described the variety of evidence that should
be considered in developing guidelines, and it cautioned
that any emerging clinical practice guidelines should be
based on careful systematic weighing of research data and
clinical expertise. The Template noted that
the successful construction of guidelines relies on the availability
of adequate scientific and clinical evidence concerning the inter-
vention being applied and the diagnostic condition being
treated. . . . Panels [should] weigh the available evidence accord-
ing to accepted standards of scientific merit, recognizing that the
warrant for conclusions differs widely for different bodies of data.
(p. 2)
Both the Template and the subsequent revised policy
document that replaced it—the Criteria for Evaluating
Treatment Guidelines (American Psychological Associa-
tion, 2002)—were quite specific in indicating that the ev-
idence base for any psychological intervention should be
evaluated in terms of two separate dimensions: efficacy and
clinical utility. The dimension of efficacy lays out criteria
for the evaluation of the strength of evidence pertaining to
establishing causal relationships between interventions and
disorders under treatment. The clinical utility dimension
includes a consideration of available research evidence and
clinical consensus regarding the generalizability, feasibility
(including patient acceptability), and costs and benefits of
interventions.
The Template was used to examine a selection of
available mental health treatment guidelines, and wide
variation was found in the quality of the guidelines’ cov-
erage of the relevant literature as well as the scientific and
clinical basis, specificity, and generalizability of their treat-
ment recommendations (Stricker et al., 1999). Even guide-
lines that were clearly designed to educate rather than to
legislate, were interdisciplinary in nature, and provided
extensive empirical and clinical information did not always
accurately translate the evidence reviewed into the algo-
rithms that determined the protocols for treatment under
particular sets of circumstances. Psychologists have been
particularly concerned about widely disseminated practice
guidelines that recommend the use of medications over
psychological interventions in the absence of data support-
ing such recommendations (Barlow, 1996; Beutler, 1998;
Muñoz, Hollon, McGrath, Rehm, & VandenBos, 1994;
Nathan, 1998).
The general benefits of psychotherapy had been es-
tablished by meta-analytic reviews during the 1970s (Smith
& Glass, 1977; Smith, Glass, & Miller, 1980). Neverthe-
less, a perception existed in many corners of the health
delivery system that psychological treatments for particular
disorders were either ineffective or inferior to pharmaco-
logical treatment. In 1995, the APA Division 12 (Clinical
Psychology) Task Force on Promotion and Dissemination
of Psychological Procedures, in an effort to promote treat-
ments delivered by psychologists, published criteria for
identifying empirically validated treatments (subsequently
relabeled empirically supported treatments) for particular
disorders (Chambless et al., 1996, 1998). This task force
identified 18 treatments whose empirical support they con-
sidered to be well established on the basis of criteria that
included having been tested in randomized controlled trials
(RCTs) with a specific population and implemented using
a treatment manual.
Although the goal was to identify treatments with
evidence for efficacy comparable to the evidence for the
efficacy of medications—and, hence, to highlight the con-
tribution of psychological treatments—the Division 12
Task Force report sparked a decade of both enthusiasm and
controversy. The report increased recognition of demon-
strably effective psychological treatments among the pub-
lic, policymakers, and training programs. At the same time,
many psychologists raised concerns about the exclusive
focus on brief, manualized treatments; the emphasis on
specific treatment effects as opposed to common factors
that account for much of the variance in outcomes across
disorders; and the applicability to a diverse range of pa-
tients varying in comorbidity, personality, race, ethnicity,
and culture.
In response, several groups of psychologists, includ-
ing other divisions of APA, offered additional frameworks
for integrating the available research evidence. In 1999,
APA Division 29 (Psychotherapy) established a task force
to identify, operationalize, and disseminate information on
empirically supported therapy relationships, given the pow-
erful association between outcome and aspects of the ther-
apeutic relationship such as the therapeutic alliance
(Norcross, 2001). APA Division 17 (Society of Counseling
Psychology) also undertook an examination of empirically
supported treatments in counseling psychology (Wampold,
Lichtenberg, & Waehler, 2002). The Society of Behavioral
Medicine, which is not a part of APA but has a significantly
overlapping membership, has recently published criteria
for examining the evidence base for behavioral medicine
interventions (Davidson, Trudeau, Ockene, Orleans, &
Kaplan, 2003). As of this writing, we are aware that task
forces have been appointed to examine related issues by a
large number of APA divisions concerned with practice
issues.
At the same time that these groups within psychology
have been grappling with how best to conceptualize and
examine the scientific basis for practice, the evidence-
based practice movement has become a key feature of
health care systems and health care policy. At the state
level, a number of initiatives encourage or mandate the use
of a specific list of mental health treatments within state
Medicaid programs (e.g., Carpinello, Rosenberg, Stone,
Schwager, & Felton, 2002; Chorpita et al., 2002; see also
Reed & Eisman, 2006; Tanenbaum, 2005). At the federal
level, a major joint initiative of the National Institute of
Mental Health and the Department of Health and Human
Service’s Substance Abuse and Mental Health Services
Administration focuses on promoting, implementing, and
evaluating evidence-based mental health practices within
state mental health systems (e.g., see National Institutes of
Health, 2004). The goals of evidence-based practice initi-
atives to improve quality and cost-effectiveness and to
enhance accountability are laudable and broadly supported
within psychology, although empirical evidence of system-
wide improvements following their implementation is still
limited. However, the psychological community—includ-
ing both scientists and practitioners—is concerned that
evidence-based practice initiatives not be misused as a
justification for inappropriately restricting access to care
and choice of treatments.
It was in this context that 2005 APA President Ronald
F. Levant appointed the APA Presidential Task Force on
Evidence-Based Practice (hereinafter, Task Force). The
Task Force included scientists and practitioners from a
wide range of perspectives and traditions, reflecting the
diverse perspectives within the field. In this report, the Task
Force hopes to draw on APA’s century-long tradition of
attention to the integration of science and practice by
creating a document that describes psychology’s funda-
mental commitment to sophisticated evidence-based psy-
chological practice and takes into account the full range of
evidence that policymakers must consider. We aspire to set
the stage for further development and refinement of evi-
dence-based practice for the betterment of psychological
aspects of health care as it is delivered around the world.¹
Definition
On the basis of its review of the literature and its deliber-
ations, the Task Force agreed on the following definition:
Evidence-based practice in psychology (EBPP) is the inte-
gration of the best available research with clinical expertise
in the context of patient characteristics, culture, and
preferences.
This definition of EBPP closely parallels the definition
of evidence-based practice adopted by the Institute of Med-
icine (2001; as adapted from Sackett, Straus, Richardson,
Rosenberg, & Haynes, 2000): “Evidence-based practice is
the integration of best research evidence with clinical ex-
pertise and patient values” (p. 147). Psychology builds on
the Institute of Medicine definition by deepening the ex-
amination of clinical expertise and broadening the consid-
eration of patient characteristics. The purpose of EBPP is to
promote effective psychological practice and enhance pub-
lic health by applying empirically supported principles of
psychological assessment, case formulation, therapeutic re-
lationship, and intervention.
Psychological practice entails many types of interven-
tions, in multiple settings, for a wide variety of potential
patients. In this document, intervention refers to all direct
services rendered by health care psychologists, including
assessment, diagnosis, prevention, treatment, psychother-
apy, and consultation. As is the case with most discussions
of evidence-based practice, we focus on treatment. The
same general principles apply to psychological assessment,
which is essential to effective treatment. The settings in-
clude but are not limited to hospitals, clinics, independent
practices, schools, military installations, public health in-
stitutions, rehabilitation institutes, primary care centers,
counseling centers, and nursing homes.
To be consistent with discussions of evidence-based
practice in other areas of health care, we use the term
patient in this document to refer to the child, adolescent,
adult, older adult, couple, family, group, organization,
community, or other population receiving psychological
services. However, we recognize that in many situations
there are important and valid reasons for using such terms
as client, consumer, or person in place of patient to de-
scribe the recipient of services. Further, psychologists tar-
get a variety of problems, including but not restricted to
mental health, academic, vocational, relational, health,
community, and other problems in their professional
practices.
It is important to clarify the relation between EBPP
and empirically supported treatments (ESTs). EBPP is the
more comprehensive concept. ESTs start with a treatment
and ask whether it works for a certain disorder or problem
under specified circumstances. EBPP starts with the patient
and asks what research evidence (including relevant results
from RCTs) will assist the psychologist in achieving the
best outcome. In addition, ESTs are specific psychological
treatments that have been shown to be efficacious in con-
trolled clinical trials, whereas EBPP encompasses a
broader range of clinical activities (e.g., psychological as-
sessment, case formulation, therapy relationships). As
such, EBPP articulates a decision-making process for inte-
grating multiple streams of research evidence—including
but not limited to RCTs—into the intervention process.
The following sections explore in greater detail the
three major components of this definition—best available
research, clinical expertise, and patient characteristics—
and their integration.
Best Available Research Evidence
A sizable body of scientific evidence drawn from a variety
of research designs and methodologies attests to the effec-
tiveness of psychological practices.

¹ The Task Force limited its consideration to evidence-based practice as it
relates to health services provided by psychologists. Therefore, other
organizational, community, or educational applications of evidence-based
practice by psychologists are outside the scope of this report. Further, the
Task Force was charged with defining and explicating principles of
evidence-based practice in psychology but not with developing practice
guidelines for individual psychologists or with other forms of
implementation.
In its first two meetings, through an iterative process of small
working groups and subsequent review and revision of all drafts by the
entire group, the Task Force achieved consensus in support of draft
versions of its two primary work products: a draft APA policy statement
and a draft report. The draft documents were circulated widely, with a
request for review and comment to the APA Council of Representatives,
boards and committees, divisions, and state and provincial psychological
associations. Notice of the documents’ availability for review and com-
ment by members was published in the APA Monitor on Psychology and
publicized on the front page of the APA Web site. A total of 199 sets of
comments were submitted by groups and by individual members. Each of
these comments was reviewed and discussed by the Task Force in a series
of conference calls. At its final meeting, the Task Force achieved consen-
sus on revised versions of the proposed APA policy statement and the
current report.

The research literature
on the effect of psychological interventions indicates that
these interventions are safe and effective for a large number
of children and youths (Kazdin & Weisz, 2003; Weisz,
Hawley, & Doss, 2004), adults (Barlow, 2004; Nathan &
Gorman, 2002; Roth & Fonagy, 2004; Wampold et al.,
1997), and older adults (Duffy, 1999; Zarit & Knight,
1996) across a wide range of psychological, addictive,
health, and relational problems. More recent research has
indicated that compared with alternative approaches, such
as medications, psychological treatments are particularly
enduring (Hollon, Stewart, & Strunk, 2006). Further, re-
search has demonstrated that psychotherapy can and often
does pay for itself in terms of medical-cost offset, increased
productivity, and life satisfaction (Chiles, Lambert, &
Hatch, 2002; Yates, 1994).
Psychologists possess distinctive strengths in design-
ing, conducting, and interpreting research studies that can
guide evidence-based practice. Moreover, psychology—as
a science and as a profession—is distinctive in combining
scientific commitment with an emphasis on human rela-
tionships and individual differences. As such, psychology
can help to develop, broaden, and improve the research
base for evidence-based practice.
There is broad consensus that psychological practice
needs to be based on evidence and that research needs to
balance internal and external validity. Research will not
always address all practice needs. Major issues in integrat-
ing research in day-to-day practice include (a) the relative
weight to place on different research methods; (b) the
representativeness of research samples; (c) whether re-
search results should guide practice at the level of princi-
ples of change, intervention strategies, or specific proto-
cols; (d) the generalizability and transportability of
treatments supported in controlled research to clinical prac-
tice settings; (e) the extent to which judgments can be made
about treatments of choice when the number and duration
of treatments tested have been limited; and (f) the degree to
which the results of efficacy and effectiveness research can
be generalized from primarily White samples to minority
and marginalized populations (see Westen, Novotny, &
Thompson-Brenner, 2004; see also contrasting position
papers in Norcross, Beutler, & Levant, 2005). Neverthe-
less, research on practice has made progress in investigat-
ing these issues and is providing evidence that is more
responsive to day-to-day practice. There is sufficient con-
sensus to move forward with the principles of EBPP.
Meta-analytic investigations since the 1970s have
shown that most therapeutic practices in widespread clini-
cal use are generally effective for treating a range of
problems (Lambert & Ogles, 2004; Wampold, 2001). In
fact, the effect sizes of psychological interventions for
children, adults, and older adults rival or exceed those of
widely accepted medical treatments (Barlow, 2004; Lipsey
& Wilson, 2001; Rosenthal, 1990; Weisz, Jensen, &
McLeod, 2005). It is important not to assume that inter-
ventions that have not yet been studied in controlled trials
are ineffective. Specific interventions that have not been
subjected to systematic empirical testing for specific prob-
lems cannot be assumed to be either effective or ineffec-
tive; they are simply untested to date. Nonetheless, good
practice and science call for the timely testing of psycho-
logical practices in a way that adequately operationalizes
them using appropriate scientific methodology. Widely
used psychological practices as well as innovations devel-
oped in the field or laboratory should be rigorously evalu-
ated, and barriers to conducting this research should be
identified and addressed.
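The effect sizes referred to above are typically standardized mean differences. As a concrete illustration (with invented numbers, not data from any study cited here), the following sketch computes Cohen's d, the most common such metric, from treatment and control outcome scores:

```python
# Illustrative only: Cohen's d, the standardized mean difference
# commonly used as the effect-size metric in psychotherapy research.
# The outcome scores below are invented for this example.
import math

def cohens_d(treated, control):
    """(mean difference) / (pooled standard deviation)."""
    n1, n2 = len(treated), len(control)
    m1, m2 = sum(treated) / n1, sum(control) / n2
    var1 = sum((x - m1) ** 2 for x in treated) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

treated = [14, 9, 17, 11, 16, 10]   # hypothetical improvement scores
control = [11, 7, 15, 9, 13, 8]
print(f"Cohen's d = {cohens_d(treated, control):.2f}")  # ~0.73 here
```

By Cohen's conventional benchmarks, values near 0.2, 0.5, and 0.8 are read as small, medium, and large effects, respectively.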
Multiple Types of Research Evidence
Best research evidence refers to scientific results related to
intervention strategies, assessment, clinical problems, and
patient populations in laboratory and field settings as well
as to clinically relevant results of basic research in psy-
chology and related fields. APA endorses multiple types of
research evidence (e.g., efficacy, effectiveness, cost-effec-
tiveness, cost–benefit, epidemiological, treatment utiliza-
tion) that contribute to effective psychological practice.
Multiple research designs contribute to evidence-
based practice, and different research designs are better
suited to address different types of questions (Greenberg &
Newman, 1996):
● Clinical observation (including individual case
studies) and basic psychological science are valu-
able sources of innovations and hypotheses (the
context of scientific discovery).
● Qualitative research can be used to describe the
subjective, lived experiences of people, including
participants in psychotherapy.
● Systematic case studies are particularly useful when
aggregated—as in the form of practice research
networks—for comparing individual patients with
others with similar characteristics.
● Single-case experimental designs are particularly
useful for establishing causal relationships in the
context of an individual.
● Public health and ethnographic research are espe-
cially useful for tracking the availability, utilization,
and acceptance of mental health treatments as well
as suggesting ways of altering these treatments to
maximize their utility in a given social context.
● Process–outcome studies are especially valuable for
identifying mechanisms of change.
● Studies of interventions as these are delivered in
naturalistic settings (effectiveness research) are well
suited for assessing the ecological validity of
treatments.
● RCTs and their logical equivalents (efficacy re-
search) are the standard for drawing causal infer-
ences about the effects of interventions (context of
scientific verification).
● Meta-analysis is a systematic means to synthesize
results from multiple studies, test hypotheses, and
quantitatively estimate the size of effects (a minimal
computational sketch follows this list).
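To make the last point concrete, the sketch below shows the core of a fixed-effect meta-analysis: per-study effect sizes are pooled with weights equal to the inverse of their variances. The study values are hypothetical, and a real synthesis would also assess heterogeneity and often use a random-effects model.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and variances below are hypothetical.
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance-weighted mean effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

effects = [0.55, 0.72, 0.40, 0.61]       # per-study standardized mean differences
variances = [0.040, 0.055, 0.030, 0.048]

d, se = pool_fixed_effect(effects, variances)
print(f"Pooled d = {d:.2f}, 95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```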
With respect to evaluating research on specific interven-
tions, current APA policy identifies two widely accepted
dimensions. As stated in the Criteria for Evaluating Treat-
ment Guidelines (American Psychological Association,
2002),
The first dimension is treatment efficacy, the systematic and
scientific evaluation of whether a treatment works. The second
dimension is clinical utility, the applicability, feasibility, and
usefulness of the intervention in the local or specific setting where
it is to be offered. This dimension also includes determination of
the generalizability of an intervention whose efficacy has been
established. (p. 1053)
Types of research evidence with regard to intervention
research in ascending order as to their contribution to
conclusions about efficacy include “clinical opinion, ob-
servation, and consensus among recognized experts repre-
senting the range of use in the field” (Criterion 2.1); “sys-
tematized clinical observation” (Criterion 2.2); and
“sophisticated empirical methodologies, including quasi-
experiments and randomized controlled experiments or
their logical equivalents” (Criterion 2.3; American Psycho-
logical Association, 2002, p. 1054). Among sophisticated
empirical methodologies, “randomized controlled experi-
ments represent a more stringent way to evaluate treatment
efficacy because they are the most effective way to rule out
threats to internal validity in a single experiment” (Amer-
ican Psychological Association, 2002, p. 1054).
Evidence on clinical utility is also crucial. Per estab-
lished APA policy (American Psychological Association,
2002), at a minimum this includes attention to generality of
effects across varying and diverse patients, therapists, set-
tings, and the interaction of these factors; the robustness of
treatments across various modes of delivery; the feasibility
with which treatments can be delivered to patients in real-
world settings; and the costs associated with treatments.
Evidence-based practice requires that psychologists rec-
ognize the strengths and limitations of evidence obtained from
different types of research. Research has shown that the treat-
ment method (Nathan & Gorman, 2002), the individual psy-
chologist (Wampold, 2001), the treatment relationship
(Norcross, 2002), and the patient (Bohart & Tallman, 1999)
are all vital contributors to the success of psychological prac-
tice. Comprehensive evidence-based practice will consider all
of these determinants and their optimal combinations. Psy-
chological practice is a complex relational and technical en-
terprise that requires clinical and research attention to multi-
ple, interacting sources of treatment effectiveness. There
remain many disorders, problem constellations, and clinical
situations for which empirical data are sparse. In such in-
stances, clinicians use their best clinical judgment and knowl-
edge of the best available research evidence to develop co-
herent treatment strategies. Researchers and practitioners
should join together to ensure that the research available on
psychological practice is both clinically relevant and inter-
nally valid.
Future Directions
EBPP has important implications for research programs
and funding priorities. These programs and priorities
should emphasize research on the following:
● psychological treatments of established efficacy in
combination with—and as an alternative to—phar-
macological treatments;
● the generalizability and transportability of interven-
tions shown to be efficacious in controlled research
settings;
● Patient × Treatment interactions (moderators);
● the efficacy and effectiveness of psychological prac-
tice with underrepresented groups, such as those char-
acterized by gender, gender identity, ethnicity, race,
social class, disability status, and sexual orientation;
● the efficacy and effectiveness of psychological
treatments with children and youths at different
developmental stages;
● the efficacy and effectiveness of psychological
treatments with older adults;
● distinguishing common and specific factors as
mechanisms of change;
● characteristics and actions of the psychologist and
the therapeutic relationship that contribute to posi-
tive outcomes;
● the effectiveness of widely practiced treatments—
based on various theoretical orientations and inte-
grative blends—that have not yet been subjected to
controlled research;
● the development of models of treatment based on
identification and observation of the practices of
clinicians in the community who empirically obtain
the most positive outcomes;
● criteria for discontinuing treatment;
● accessibility and utilization of psychological
services;
● the cost-effectiveness and costs–benefits of psycho-
logical interventions;
● development and testing of practice research
networks;
● the effects of feedback regarding treatment progress
to the psychologist or patient;
● development of profession-wide consensus, rooted in
the best available research evidence, on psychological
treatments that are considered discredited; and
● research on prevention of psychological disorders
and risk behaviors.
Clinical Expertise
Clinical expertise² is essential for identifying and integrat-
ing the best research evidence with clinical data (e.g.,
information about the patient obtained over the course of
treatment) in the context of the patient’s characteristics and
preferences to deliver services that have the highest prob-
ability of achieving the goals of therapy. Psychologists are
trained as scientists as well as practitioners. An advantage
of psychological training is that it fosters a clinical exper-
tise informed by scientific expertise, allowing the psychol-
ogist to understand and integrate scientific literature as well
as to frame and test hypotheses and interventions in prac-
tice as a “local clinical scientist” (Stricker & Trierweiler,
1995).

² As it is used in this report, clinical expertise refers to competence
attained by psychologists through education, training, and experience that
results in effective practice; the term is not meant to refer to extraordinary
performance that might characterize an elite group (e.g., the top 2%) of
clinicians.
Cognitive scientists have found consistent evidence of
enduring and significant differences between experts and
novices undertaking complex tasks in several domains (Bé-
dard & Chi, 1992; Bransford, Brown, & Cocking, 1999;
Gambrill, 2005). Experts recognize meaningful patterns
and disregard irrelevant information, acquire extensive
knowledge and organize it in ways that reflect a deep
understanding of their domain, organize their knowledge
using functional rather than descriptive features, retrieve
knowledge relevant to the task at hand fluidly and auto-
matically, adapt to new situations, self-monitor their
knowledge and performance, know when their knowledge
is inadequate, continue to learn, and generally attain out-
comes commensurate with their expertise.
However, experts are not infallible. All humans are
prone to errors and biases. Some of these stem from cog-
nitive strategies and heuristics that are generally adaptive
and efficient. Others stem from emotional reactions, which
generally guide adaptive behavior as well but can also lead
to biased or motivated reasoning (e.g., Ditto & Lopez,
1992; Ditto, Munro, Apanovitch, Scepansky, & Lockhart,
2003; Kunda, 1990). Whenever psychologists involved in
research or practice move from observations to inferences
and generalizations, there are inherent risks of idiosyncratic
interpretations, overgeneralizations, confirmatory biases,
and similar errors in judgment (Dawes, Faust, & Meehl,
2002; Grove, Zald, Lebow, Snitz, & Nelson, 2000; Meehl,
1954; Westen & Weinberger, 2004). Integral to clinical
expertise is an awareness of the limits of one’s knowledge
and skills and attention to the heuristics and biases—both
cognitive and affective—that can affect clinical judgment.
Mechanisms such as consultation and systematic feedback
from the patient can mitigate some of these biases.
The individual therapist has a substantial impact on
outcomes, both in clinical trials and in practice settings
(Crits-Christoph et al., 1991; Huppert et al., 2001; Kim,
Wampold, & Bolt, in press; Wampold & Brown, 2005).
The fact that treatment outcomes are systematically related
to the provider of the treatment (above and beyond the type
of treatment) provides strong evidence for the importance
of understanding expertise in clinical practice as a way of
enhancing patient outcomes.
Components of Clinical Expertise
Clinical expertise encompasses a number of competencies
that promote positive therapeutic outcomes. These include
(a) assessment, diagnostic judgment, systematic case for-
mulation, and treatment planning; (b) clinical decision
making, treatment implementation, and monitoring of pa-
tient progress; (c) interpersonal expertise; (d) continual
self-reflection and acquisition of skills; (e) appropriate
evaluation and use of research evidence in both basic and
applied psychological science; (f) understanding the influ-
ence of individual and cultural differences on treatment; (g)
seeking available resources (e.g., consultation, adjunctive
or alternative services) as needed; and (h) having a cogent
rationale for clinical strategies. Expertise develops from
clinical and scientific training, theoretical understanding,
experience, self-reflection, knowledge of research, and
continuing professional education and training. It is mani-
fested in all clinical activities, including but not limited to
forming therapeutic alliances; assessing patients and devel-
oping systematic case formulations, planning treatment,
and setting goals; selecting interventions and applying
them skillfully; monitoring patient progress and adjusting
practices accordingly; attending to patients’ individual, so-
cial, and cultural contexts; and seeking available resources
as needed (e.g., consultation, adjunctive or alternative
services).
Assessment, diagnostic judgment, system-
atic case formulation, and treatment planning.
The clinically expert psychologist is able to formulate clear
and theoretically coherent case conceptualizations, assess
patient pathology as well as clinically relevant strengths,
understand complex patient presentations, and make accu-
rate diagnostic judgments. Expert clinicians revise their
case conceptualizations as treatment proceeds and seek
both confirming and disconfirming evidence. Clinical ex-
pertise also involves identifying and helping patients to
acknowledge psychological processes that contribute to
distress or dysfunction.
Treatment planning involves setting goals and tasks of
treatment that take into consideration the unique patient,
the nature of the patient’s problems and concerns, the likely
prognosis and expected benefits of treatment, and available
resources. The goals of therapy are developed in collabo-
ration with the patient and consider the patient and his or
her family’s worldview and sociocultural context. The
choice of treatment strategies requires knowledge of inter-
ventions and the research that supports their effectiveness
as well as research relevant to matching interventions to
patients (e.g., Beutler, Alomohamed, Moleiro, & Romanelli,
2002; Blatt, Shahar, & Zurhoff, 2002; Norcross, 2002).
Expertise also requires knowledge about psychopathology;
treatment process; and patient attitudes, values, and con-
text—including cultural context—that can affect the choice
and implementation of effective treatment strategies.
Clinical decision making, treatment imple-
mentation, and monitoring of patient progress.
Clinical expertise entails the skillful and flexible delivery
of treatment. Skill and flexibility require knowledge of and
proficiency in delivering psychological interventions and
the ability to adapt the treatment to the particular case.
Flexibility is manifested in tact, timing, pacing, and fram-
ing of interventions; maintaining an effective balance be-
tween consistency of interventions and responsiveness to
patient feedback; and attention to acknowledged and unac-
knowledged meanings, beliefs, and emotions.
Clinical expertise also entails the monitoring of pa-
tient progress (and of changes in the patient’s circum-
stances—e.g., job loss, major illness) that may suggest the
need to adjust the treatment (Lambert, Bergin, & Garfield,
2004). If progress is not proceeding adequately, the psy-
chologist alters or addresses problematic aspects of the
treatment (e.g., problems in the therapeutic relationship or
in the implementation of the goals of the treatment) as
appropriate. If insufficient progress remains a problem, the
therapist considers alternative diagnoses and formulations,
consultation, supervision, or referral. The clinical expert
makes decisions about termination in timely ways by as-
sessing patient progress in the context of the patient’s life,
treatment goals, resources, and relapse potential.
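As a purely hypothetical sketch of what such monitoring can look like, a psychologist (or an outcome-tracking system) might compare session-by-session symptom scores against an expected-improvement benchmark and flag cases that are not on track. The benchmark curve, tolerance band, and scores below are invented, not validated norms.

```python
# Hypothetical progress-monitoring sketch: flag a case as "not on
# track" when observed symptom scores (higher = worse) exceed a
# tolerance band around an expected-improvement trajectory.
import math

def expected_score(baseline, session, rate=0.12, floor=30.0):
    """Illustrative exponential-improvement benchmark toward a nominal floor."""
    return floor + (baseline - floor) * math.exp(-rate * session)

def not_on_track(baseline, observed, tolerance=8.0):
    """observed: dict mapping session number to symptom score."""
    alerts = []
    for session, score in sorted(observed.items()):
        benchmark = expected_score(baseline, session)
        if score > benchmark + tolerance:
            alerts.append((session, score, benchmark))
    return alerts

scores = {1: 68, 3: 66, 5: 69, 8: 71}  # hypothetical patient, baseline 70
for session, score, benchmark in not_on_track(70.0, scores):
    print(f"Session {session}: score {score} vs. expected ~{benchmark:.0f}; review treatment")
```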
Interpersonal expertise. Central to clinical
expertise is interpersonal skill, which is manifested in
forming a therapeutic relationship, encoding and decoding
verbal and nonverbal responses, creating realistic but pos-
itive expectations, and responding empathically to the pa-
tient’s explicit and implicit experiences and concerns. In-
terpersonal expertise involves the flexibility to be clinically
effective with patients of diverse backgrounds. Interperson-
ally skilled psychologists are able to challenge patients in a
supportive atmosphere that fosters exploration, openness,
and change.
Psychological practice is, at root, an interpersonal
relationship between psychologist and patient. Each partic-
ipant in the treatment relationship exerts influence on its
process and outcome, and the compatibility of psychologist
and patient(s) is particularly important. Converging sources
of evidence indicate that individual health care profession-
als affect the efficacy of treatment (American Psychologi-
cal Association, 2002). In psychotherapy, for example,
individual-therapist effects (within treatment) account for
5%–8% of the outcome variance (Crits-Christoph et al.,
1991; Kim et al., in press; Project MATCH Research
Group, 1998; Wampold & Brown, 2005). Decades of re-
search also support the contribution of an active and mo-
tivated patient to successful treatment (e.g., Bohart & Tall-
man, 1999; Clarkin & Levy, 2004; W. R. Miller &
Rollnick, 2002; Prochaska, Norcross, & DiClemente, 1994).
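To make the variance figure concrete: therapist effects of this kind are commonly expressed as an intraclass correlation, the share of outcome variance lying between therapists rather than between patients of the same therapist. The sketch below computes ICC(1) from a one-way random-effects ANOVA with therapists as the grouping factor; the outcome scores are invented so that the estimate lands near the cited range. Trial analyses typically estimate the same quantity with multilevel models that adjust for covariates.

```python
# Hypothetical sketch: share of outcome variance attributable to
# therapists, computed as ICC(1) from a one-way random-effects ANOVA.
def icc1(groups):
    """groups: one list of patient outcome scores per therapist."""
    k = len(groups)
    sizes = [len(g) for g in groups]
    N = sum(sizes)
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ms_between = sum(n * (m - grand) ** 2
                     for n, m in zip(sizes, means)) / (k - 1)
    ms_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means)) / (N - k)
    n0 = (N - sum(n ** 2 for n in sizes) / N) / (k - 1)  # adjusted mean caseload
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

therapists = [            # invented outcomes for each therapist's patients
    [57, 70, 59, 66],
    [48, 62, 51, 59],
    [68, 48, 63, 57],
]
print(f"Variance between therapists: {icc1(therapists):.1%}")  # ~5.8% here
```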
With the development of interactive electronic tech-
nology (e.g., telehealth), many community-wide psycho-
logical interventions or other approaches do not necessarily
involve direct, face-to-face contact with a psychologist.
However, these interventions, to be effective, also engage
the patient actively in the treatment process and attend in a
flexible manner to individual variations among targeted
groups.
The clinical expert fosters the patient’s positive en-
gagement in the therapeutic process, monitors the thera-
peutic alliance, and attends carefully to barriers to engage-
ment and change. The clinical expert recognizes barriers to
progress and addresses them in a way that is consistent with
theory and research (e.g., exploring therapeutic impasses
with the patient, addressing problems in the therapeutic
relationship).
Continual self-reflection and acquisition of
skills. Clinical expertise requires the ability to reflect on
one’s own experience, knowledge, hypotheses, inferences,
emotional reactions, and behaviors and to use that reflec-
tion to modify one’s practices accordingly. Integral to
clinical expertise is an awareness of the limits of one’s
knowledge and skills as well as a recognition of the heu-
ristics and biases (both cognitive and affective) that can
affect clinical judgment (e.g., biases that can inhibit recog-
nition of the need to alter case conceptualizations that are
inaccurate or treatment strategies that are not working).
Clinical expertise involves taking explicit action to limit
the effects of these biases.
Developing and maintaining clinical expertise and
applying this expertise to specific patients entail the con-
tinual incorporation of new knowledge and skills derived
from (a) research and theory; (b) systematic clinical obser-
vation, disciplined inquiry, and hypothesis testing; (c) self-
reflection and feedback from other sources (e.g., supervi-
sors, peers, patients, other health professionals, the
patient’s significant others [where appropriate]); (d) mon-
itoring of patient outcomes; and (e) continuing education
and other learning opportunities (e.g., practice networks,
patient advocacy groups).
Evaluation and use of research evidence.
Clinical expertise in psychology includes scientific exper-
tise. This is one of the hallmarks of psychological educa-
tion and one of the advantages of psychological training.
An understanding of scientific method allows psycholo-
gists to consider evidence from a range of research designs,
evaluate the internal and external validity of individual
studies, evaluate the magnitude of effects across studies,
and apply relevant research to individual cases. Clinical
expertise also comprises a scientific attitude toward clinical
work, characterized by openness to data, clinical hypothe-
sis generation and testing, and a capacity to use theory to
guide interventions without allowing theoretical precon-
ceptions to override clinical or research data.
Understanding the influence of individual,
cultural, and contextual differences on treat-
ment. Clinical expertise requires an awareness of the
individual, social, and cultural context of the patient, in-
cluding but not limited to age and development, ethnicity,
culture, race, gender, sexual orientation, religious commit-
ments, and socioeconomic status (see the Patient Charac-
teristics, Culture, and Preferences section). Clinical exper-
tise allows psychologists to adapt interventions and
construct a therapeutic milieu that respects the patient’s
worldview, values, preferences, capacities, and other char-
acteristics (Arnkoff, Glass, & Shapiro, 2002; Sue & Lam,
2002). APA has adopted practice guidelines on multicul-
tural practice, sexual orientation, and older adults to assist
psychologists in tailoring their practices to patient differ-
ences (American Psychological Association, 2000, 2003,
2004).
Seeking available resources as needed
(e.g., consultation, adjunctive or alternative
services). The psychologist is cognizant that accessing
additional resources can sometimes enhance the effective-
ness of treatment. When research evidence indicates the
value of adjunctive services or when patients are not mak-
ing progress as expected, the psychologist may seek con-
sultation or make a referral. Culturally sensitive alternative
services responsive to a patient’s context or worldview may
complement psychological treatment. Consultation for the
psychologist is a means to monitor—and correct, if neces-
sary—cognitive and affective biases.
A cogent rationale for clinical strategies.
Clinical expertise requires a planful approach to the treat-
ment of psychological problems. Although clinical practice
is often eclectic or integrative (Norcross & Goldfried,
2005), and many effects of psychological treatment reflect
nonspecific aspects of therapeutic engagement (e.g.,
changes that occur through development of an empathic
relationship; Lambert et al., 2004; Weinberger, 1995), psy-
chologists rely on well-articulated case formulations,
knowledge of relevant research, and the organization pro-
vided by theoretical conceptualizations and clinical expe-
rience to craft interventions designed to attain desired
outcomes.
Some patients have a well-defined issue or disorder
for which there is a body of evidence that strongly supports
the effectiveness of a particular treatment. This evidence
should be considered in formulating a treatment plan, and
a cogent rationale should be articulated for any course of
treatment recommended. There are many problem constel-
lations, patient populations, and clinical situations for
which treatment evidence is sparse. In such instances,
evidence-based practice consists of using clinical expertise
in interpreting and applying the best available evidence
while carefully monitoring patient progress and modifying
treatment as appropriate (Hayes, Barlow, & Nelson-Gray,
1999; Lambert, Harmon, Slade, Whipple, & Hawkins,
2005; S. D. Miller, Duncan, & Hubble, 2005).
Future Directions
Although much less research is available on clinical exper-
tise than on psychological interventions, an important foun-
dation is emerging (Goodheart, 2006; Skovholt & Jennings,
2004; Westen & Weinberger, 2004). For example, research
on case formulation and diagnosis suggests that clinical
inferences, diagnostic judgments, and formulations can be
reliable and valid when structured in ways that maximize
clinical expertise (Eells, Lombart, Kendjelic, Turner, &
Lucas, 2005; Persons, 1991; Westen & Weinberger, 2005).
Research suggests that sensitivity and flexibility in the
administration of therapeutic interventions produce better
outcomes than rigid application of manuals or principles
(Castonguay, Goldfried, Wiser, Raue, & Hayes, 1996;
Henry, Schacht, Strupp, Butler, & Binder, 1993; Huppert et
al., 2001). Reviews of research on biases and heuristics in
clinical judgment have suggested procedures that clinicians
might use to minimize those biases (Garb, 1998). Because
of the importance of therapeutic alliance to outcome (Hor-
vath & Bedi, 2002; Martin, Garske, & Davis, 2000; Shirk
& Karver, 2003), an understanding of the personal at-
tributes and interventions of therapists that strengthen the
alliance is essential for maximizing the quality of patient
care (Ackerman & Hilsenroth, 2003).
Mutually respectful collaboration between researchers
and expert practitioners will foster useful and systematic
empirical investigation of clinical expertise. Some of the
most pressing research needs are the following:
● studying the practices of clinicians who obtain the
best outcomes in the community, both in general
and with particular kinds of patients or problems;
● identifying technical skills used by expert clinicians
in the administration of psychological interventions
that have proven to be effective;
● improving the reliability, validity, and clinical util-
ity of diagnoses and case formulations;
● studying conditions that maximize clinical expertise
(rather than focusing primarily on limits to clinical
expertise);
● determining the extent to which errors and biases
widely studied in the literature are linked to decre-
ments in treatment outcome and how to modify or
correct those errors;
● developing well-normed measures that clinicians
can use to quantify their diagnostic judgments, mea-
sure therapeutic progress over time, and assess the
therapeutic process;
● distinguishing expertise related to common factors
shared across most therapies and expertise specific
to particular treatment approaches; and
● providing clinicians with real-time patient feedback
to benchmark progress in treatment and clinical
support tools to adjust treatment as needed.
Patient Characteristics, Culture, and
Preferences
Normative data on “what works for whom” (Nathan &
Gorman, 2002; Roth & Fonagy, 2004) provide essential
guides to effective practice. Nevertheless, psychological
services are most likely to be effective when they are
responsive to the patient’s specific problems, strengths,
personality, sociocultural context, and preferences
(Norcross, 2002). Psychology’s long history of studying
individual differences and developmental change, and its
growing empirical literature related to human diversity
(including culture³ and psychotherapy), place it in a strong
position to identify effective ways of integrating research
and clinical expertise with an understanding of patient
characteristics essential to EBPP (Hall, 2001; Sue, Zane, &
Young, 1994). EBPP involves consideration of the pa-
tient’s values, religious beliefs, worldviews, goals, and
preferences for treatment with the psychologist’s experi-
ence and understanding of the available research.
Several questions frame current debates about the role
of patient characteristics in EBPP. The first regards the
extent to which cross-diagnostic patient characteristics,
such as personality traits or constellations, moderate the
impact of empirically tested interventions.

³ Culture, in this context, is understood to encompass a broad array of
phenomena (e.g., shared values, history, knowledge, rituals, customs) that
often result in a shared sense of identity. Racial and ethnic groups may
have a shared culture, but those personal characteristics are not the only
characteristics that define cultural groups (e.g., deaf culture, inner-city
culture). Culture is a multifaceted construct, and cultural factors cannot be
understood in isolation from social, class, and personal characteristics that
make each patient unique.

A second, related question concerns the extent to which social factors
and cultural differences necessitate different forms of treat-
ment or, conversely, the extent to which interventions
widely tested in majority populations can be readily
adapted for patients with different ethnic or sociocultural
backgrounds. A third question concerns maximizing the
extent to which widely used interventions adequately at-
tend to developmental considerations, both for children and
adolescents (Weisz & Hawley, 2002) and for older adults
(Zarit & Knight, 1996). A fourth question concerns the
extent to which variable clinical presentations, such as
comorbidity and polysymptomatic presentations, moderate
the impact of interventions. Underlying all of these ques-
tions is the issue of how best to approach the treatment of
patients whose characteristics (e.g., gender, gender iden-
tity, ethnicity, race, social class, disability status, sexual
orientation) and problems (e.g., comorbidity) may differ
from those of samples studied in research. This is a matter
of active discussion in the field, and there is increasing
research attention to the generalizability and transportabil-
ity of psychological interventions.
Available data indicate that a variety of patient-related
variables influence outcomes, many of which are cross-
diagnostic characteristics such as functional status, readi-
ness to change, and level of social support (Norcross,
2002). Other patient characteristics are essential to consider
in forming and maintaining a treatment relationship and in
implementing specific interventions. These include but are
not limited to (a) variations in presenting problems or
disorders, etiology, concurrent symptoms or syndromes,
and behavior; (b) chronological age, developmental status,
developmental history, and life stage; (c) sociocultural and
familial factors (e.g., gender, gender identity, ethnicity,
race, social class, religion, disability status, family struc-
ture, sexual orientation); (d) current environmental context,
stressors (e.g., unemployment, recent life event), and social
factors (e.g., institutional racism, health care disparities);
and (e) personal values and preferences re-
lated to treatment (e.g., goals, beliefs, worldviews, treat-
ment expectations). Available research on both patient
matching and treatment failures in clinical trials of even
highly efficacious interventions suggests that different
strategies and relationships may prove better suited for
different populations (Gamst, Dana, Der-Karaberian, &
Kramer, 2000; Groth-Marnat, Beutler, & Roberts, 2001;
Norcross, 2002; Sue, Fujino, Hu, Takeuchi, & Zane, 1991).
Many presenting symptoms—for example, depres-
sion, anxiety, school failure, and bingeing and purging—
are similar across patients. However, symptoms or disor-
ders that are phenotypically similar are often heterogeneous
with respect to etiology, prognosis, and the psychological
processes that create or maintain them. Moreover, most
patients present with multiple symptoms or syndromes
rather than a single, discrete disorder (e.g., Kessler, Stang,
Wittchen, Stein, & Walters, 1999; Newman, Moffitt, Caspi,
& Silva, 1998). The presence of concurrent conditions may
moderate treatment response, and interventions intended to
treat one symptom often affect others. An emerging body
of research also suggests that personality variables underlie
many psychiatric syndromes and account for a substantial
part of the comorbidity among syndromes widely docu-
mented in research (e.g., Brown, Chorpita, & Barlow,
1998; Krueger, 2002). Psychologists must attend to the
individual person to make the complex choices necessary
to conceptualize, prioritize, and treat multiple symptoms. It
is important to know the person who has the disorder in
addition to knowing the disorder the person has.
Individual Differences
EBPP also requires attention to factors related to the pa-
tient’s development and life-stage. An enormous body of
research exists on developmental processes (e.g., attach-
ment; socialization; cognitive, social–cognitive, gender,
moral, and emotional development) that are essential in
understanding adult psychopathology and particularly in
treating children, adolescents, families, and older adults
(e.g., American Psychological Association, 2004; Samer-
off, Lewis, & Miller, 2000; Toth & Cicchetti, 1999).
EBPP requires attention to many other patient char-
acteristics, such as gender, gender identity, culture, ethnic-
ity, race, age, family context, religious beliefs, and sexual
orientation (American Psychological Association, 2000,
2003). These variables shape personality, values, world-
views, relationships, psychopathology, and attitudes to-
ward treatment. A wide range of relevant research literature
can inform psychological practice, including ethnography,
cross-cultural psychology (e.g., Berry, Segall, & Kagitçi-
basi, 1997), cultural psychiatry (e.g., Kleinman, 1977),
psychological anthropology (e.g., LeVine, 1983; Moore &
Matthews, 2001; Strauss & Quinn, 1992), and cultural
psychotherapy (Sue, 1998; Zane, Sue, Young, Nunez, &
Hall, 2004). Culture influences not only the nature and
expression of psychopathology but also the patient’s un-
derstanding of psychological and physical health and ill-
ness. Cultural values and beliefs and social factors (e.g.,
implicit racial biases) also influence patterns of seeking,
using, and receiving help; presentation and reporting of
symptoms, fears, and expectations about treatment; and
desired outcomes. Psychologists also understand and re-
flect on the ways their own characteristics, values, and
context interact with those of the patient.
Race as a social construct is a way of grouping people
into categories on the basis of perceived physical attributes,
ancestry, and other factors. Race is also more broadly
associated with power, status, and opportunity (American
Anthropological Association, 1998). In Western cultures,
European or White “race” confers advantage and opportu-
nity, even as improved social attitudes and public policies
have reinforced social equality. Race is thus an interper-
sonal and political process with significant implications for
clinical practice and health care quality (Smedley & Smed-
ley, 2005). Patients and clinicians may “belong” to racial
groups as they choose to self-identify, but the importance
of race in clinical practice is relational rather than being
solely a patient or clinician attribute. Considerable evi-
dence from many fields (Institute of Medicine, 2003) sug-
gests that racial power differentials between clinicians and
their patients, as well as systemic biases and implicit ste-
reotypes based on race or ethnicity, contribute to the ineq-
uitable care that patients of color receive across health care
services. Clinicians must carefully consider the impact of
race, ethnicity, and culture on the treatment process, rela-
tionship, and outcome.
The patient’s social and environmental context, in-
cluding recent and chronic stressors, is also important in
case formulation and treatment planning. Sociocultural and
familial factors, social class, and broader social, economic,
and situational factors (e.g., unemployment, family disrup-
tion, lack of insurance, recent losses, prejudice, immigra-
tion status) can have an enormous influence on mental
health, adaptive functioning, treatment seeking, and patient
resources (psychological, social, and financial).
Psychotherapy is a collaborative enterprise in which
patients and clinicians negotiate ways of working together
that are mutually agreeable and likely to lead to positive
outcomes. Thus, patient values and preferences (e.g., goals,
beliefs, preferred modes of treatment) are a central com-
ponent of EBPP. Patients can have strong preferences for
types of treatment and desired outcomes, and these prefer-
ences are influenced by both their cultural context and
individual factors. One role of the psychologist is to ensure
that patients understand the costs and benefits of different
practices and choices (Haynes, Devereaux, & Guyatt,
2002). EBPP seeks to maximize patient choice among
effective alternative interventions. Effective practice re-
quires balancing patient preferences and the psychologist’s
judgment—based on available evidence and clinical exper-
tise—to determine the most appropriate treatment.
Future Directions
Much additional research is needed regarding the influence
of patient characteristics on treatment selection, therapeutic
processes, and outcomes. Research on cross-diagnostic
characteristics, polysymptomatic presentations, and the ef-
fectiveness of psychological interventions with culturally
diverse groups is particularly important. We suggest the
following research priorities:
● patient characteristics as moderators of treatment
response in naturalistic settings;
● prospective outcome studies on treatments and re-
lationships tailored to patients’ cross-diagnostic
characteristics, including Aptitude × Treatment in-
teraction designs (see the model sketch following
this list);
● effectiveness of interventions that have been widely
studied in the majority population with other
populations;
● examination of the nature of implicit stereotypes
held by both psychologists and patients and success-
ful interventions for minimizing their activation or
impact;
● ways to make information about culture and psy-
chotherapy more accessible to practitioners;
● maximizing psychologists’ cognitive, emotional,
and role competence with diverse patients; and
● identifying successful models of treatment decision
making in light of patient preferences.
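To make the Aptitude × Treatment interaction idea concrete, consider a
minimal linear specification (an illustrative sketch, not part of the
Task Force report):

  Y_i = \beta_0 + \beta_1 A_i + \beta_2 T_i + \beta_3 (A_i \times T_i) + \varepsilon_i,

where Y_i is the outcome for patient i, A_i is an aptitude or other
cross-diagnostic characteristic, T_i indicates treatment condition, and
\varepsilon_i is error. A nonzero interaction coefficient \beta_3 means
that the benefit of treatment depends on the patient characteristic,
which is precisely the moderation such designs are built to detect.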
Conclusions
EBPP is the integration of the best available research with
clinical expertise in the context of patient characteristics,
culture, and preferences. The purpose of EBPP is to pro-
mote effective psychological practice and enhance public
health by applying empirically supported principles of psy-
chological assessment, case formulation, therapeutic rela-
tionship, and intervention. Much has been learned over the
past century from basic and applied psychological research
as well as from observations and hypotheses developed in
clinical practice. Many strategies for working with patients
have emerged and been refined through the kinds of trial
and error and clinical hypothesis generation and testing that
constitute the most scientific aspect of clinical practice. Yet
clinical hypothesis testing has its limits, hence the need to
integrate clinical expertise with the best available research.
Perhaps the central message of this task force report—
and one of the most heartening aspects of the process that
led to it—is the consensus achieved among a diverse group
of scientists, clinicians, and scientist–clinicians from mul-
tiple perspectives that EBPP requires an appreciation of the
value of multiple sources of scientific evidence. In a given
clinical circumstance, psychologists of good faith and good
judgment may disagree about how best to weigh different
forms of evidence; over time, we presume that systematic
and broad empirical inquiry—in the laboratory and in the
clinic—will point the way toward best practice in integrat-
ing best evidence. What this document reflects, however, is
a reassertion of what psychologists have known for a
century: The scientific method is a way of thinking and
observing systematically, and it is the best tool we have for
learning about what works for whom.
Clinical decisions should be made in collaboration
with the patient on the basis of the best clinically relevant
evidence and with consideration for the probable costs,
benefits, and available resources and options. It is the
treating psychologist who makes the ultimate judgment
regarding a particular intervention or treatment plan. The
involvement of an active, informed patient is generally
crucial to the success of psychological services. Treatment
decisions should never be made by untrained persons un-
familiar with the specifics of a case.
The treating psychologist determines the applicability
of research conclusions to a particular patient. Individual
patients may require decisions and interventions not di-
rectly addressed by the available research. The application
of research evidence to a given patient always involves
probabilistic inferences. Therefore, ongoing monitoring of
patient progress and adjustment of treatment as needed are
essential to EBPP.
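As a purely illustrative sketch of such probabilistic inference (the
numbers and the conditional structure here are hypothetical, not drawn
from any study cited in this report), a clinician might update an
expected response rate with Bayes’ theorem:

  P(R \mid E) = \frac{P(E \mid R)\,P(R)}{P(E \mid R)\,P(R) + P(E \mid \neg R)\,P(\neg R)},

where P(R) is the response rate reported for comparable patients and E
is evidence specific to this patient. If research suggests P(R) = .60,
and a given early marker of progress occurs in 70% of eventual
responders but only 30% of nonresponders, observing that marker revises
the estimate to (.70)(.60)/[(.70)(.60) + (.30)(.40)], or about .78. The
point is not the arithmetic but that the inference remains probabilistic
and should be revised as monitoring data accumulate.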
Moreover, psychologists must attend to a range of
outcomes that may sometimes suggest one strategy and
sometimes another, and they must attend to the strengths
and limitations of available research vis-à-vis these differ-
ent ways of measuring success. Psychological outcomes
may include not only symptom relief and prevention of
future symptomatic episodes but also quality of life, adap-
tive functioning in work and relationships, ability to make
satisfying life choices, personality change, and other goals
arrived at in the collaboration between patient and
clinician.
EBPP is a means to enhance the delivery of services to
patients within an atmosphere of mutual respect, open
communication, and collaboration among all stakeholders,
including practitioners, researchers, patients, health care
managers, and policymakers. Our goal in this document,
and in the deliberations of the Task Force that led to it, was
to set both an agenda and a tone for the next steps in the
evolution of EBPP.
REFERENCES
Ackerman, S. J., & Hilsenroth, M. J. (2003). A review of therapist
characteristics and techniques positively impacting the therapeutic al-
liance. Clinical Psychology Review, 23, 1–33.
American Anthropological Association. (1998, May 17). American An-
thropological Association statement on “race.” Retrieved June 11,
2005, from http://www.aaanet.org/stmts/racepp.htm
American Psychological Association. (1995). Template for developing
guidelines: Interventions for mental disorders and psychosocial aspects
of physical disorders. Washington, DC: Author.
American Psychological Association. (2000). Guidelines for psychother-
apy with lesbian, gay, and bisexual clients. American Psychologist, 55,
1440–1451.
American Psychological Association. (2002). Criteria for evaluating treat-
ment guidelines. American Psychologist, 57, 1052–1059.
American Psychological Association. (2003). Guidelines on multicultural
education, training, research, practice, and organizational change for
psychologists. American Psychologist, 58, 377–402.
American Psychological Association. (2004). Guidelines for psychologi-
cal practice with older adults. American Psychologist, 59, 236–260.
Arnkoff, D. B., Glass, C. R., & Shapiro, S. J. (2002). Expectations and
preferences. In J. C. Norcross (Ed.), Psychotherapy relationships that
work: Therapist contributions and responsiveness to patients (pp. 335–
356). New York: Oxford University Press.
Barlow, D. H. (1996). The effectiveness of psychotherapy: Science and
policy. Clinical Psychology: Science and Practice, 1, 109–122.
Barlow, D. H. (2004). Psychological treatments. American Psychologist,
59, 869–879.
Bédard, J., & Chi, M. T. (1992). Expertise. Current Directions in Psy-
chological Science, 1, 135–139.
Berry, J. W., Segall, M. H., & Kagitçibasi, C. (Eds.). (1997). Handbook of
cross-cultural psychology, Vol. 3: Social behavior and applications
(2nd ed.). Boston: Allyn & Bacon.
Beutler, L. E. (1998). Identifying empirically supported treatments: What
if we didn’t? Journal of Consulting and Clinical Psychology, 66,
113–120.
Beutler, L. E., Alomohamed, S., Moleiro, C., & Romanelli, R. K. (2002).
Systemic treatment selection and prescriptive therapy. In F. W. Kaslow
(Series Ed.) & J. Lebow (Vol. Ed.), Comprehensive handbook of
psychotherapy, Vol. 4: Integrative/eclectic (pp. 255–271). New York:
Wiley.
Blatt, S. J., Shahar, G., & Zuroff, D. C. (2002). Anaclitic/sociotropic and
introjective/autonomous dimensions. In J. C. Norcross (Ed.), Psycho-
therapy relationships that work: Therapist contributions and respon-
siveness to patients (pp. 315–333). New York: Oxford University Press.
Bohart, A., & Tallman, K. (1999). How clients make therapy work: The
process of active self-healing. Washington, DC: American Psycholog-
ical Association.
Bransford, D., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How people
learn: Brain, mind, experience, and school. Washington, DC: National
Academies Press.
Brown, T. A., Chorpita, B. F., & Barlow, D. H. (1998). Structural
relationships among dimensions of the DSM–IV anxiety and mood
disorders and dimensions of negative affect, positive affect, and auto-
nomic arousal. Journal of Abnormal Psychology, 107, 179–192.
Carpinello, S. E., Rosenberg, L., Stone, J., Schwager, M., & Felton, C. J.
(2002). New York State’s campaign to implement evidence-based
practices for people with serious mental disorders. Psychiatric Services,
53, 153–155.
Castonguay, L. G., Goldfried, M. R., Wiser, S., Raue, P. J., & Hayes,
A. M. (1996). Predicting the effect of cognitive therapy for depression:
A study of unique and common factors. Journal of Consulting and
Clinical Psychology, 64, 497–504.
Chambless, D. L., Baker, M. J., Baucom, D. H., Beutler, L. E., Calhoun,
K. S., Crits-Christoph, P., et al. (1998). Update on empirically validated
therapies, II. The Clinical Psychologist, 51(1), 3–16.
Chambless, D. L., Sanderson, W. C., Shoham, V., Bennett Johnson, S.,
Pope, K. S., Crits-Christoph, P., et al. (1996). An update on empirically
validated therapies. The Clinical Psychologist, 49(2), 5–18.
Chiles, J. A., Lambert, M. J., & Hatch, A. L. (2002). Medical cost offset:
A review of the impact of psychological interventions on medical
utilization over the past three decades. In N. A. Cummings, W. T.
O’Donohue, & K. E. Ferguson (Eds.), The impact of medical cost offset
on practice and research (pp. 47–56). Reno, NV: Context Press.
Chorpita, B. F., Yim, L. M., Donkervoet, J. C., Arensdorf, A., Amundsen,
M. J., McGee, C., et al. (2002). Toward large-scale implementation of
empirically supported treatments for children: A review and observa-
tions by the Hawaii Empirical Basis to Services Task Force. Clinical
Psychology: Science and Practice, 9, 165–190.
Clarkin, J. F., & Levy, K. N. (2004). The influence of client variables on
psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook
of psychotherapy and behavior change (5th ed., pp. 194–226). New
York: Wiley.
Crits-Christoph, P., Baranackie, K., Kurcias, J. S., Carroll, K., Luborsky,
L., McLellan, T., et al. (1991). Meta-analysis of therapist effects in
psychotherapy outcome studies. Psychotherapy Research, 1, 81–91.
Davidson, K. W., Trudeau, K. J., Ockene, J. K., Orleans, C. T., & Kaplan,
R. M. (2003). A primer on current evidence-based review systems and
their implications for behavioral medicine. Annals of Behavioral Med-
icine, 26, 161–171.
Dawes, R. M., Faust, D., & Meehl, P. E. (2002). Clinical versus actuarial
judgment. In T. Gilovich & D. Griffin (Eds.), Heuristics and biases:
The psychology of intuitive judgment (pp. 716–729). New York: Cam-
bridge University Press.
Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of
differential decision criteria for preferred and nonpreferred conclusions.
Journal of Personality and Social Psychology, 62, 568–584.
Ditto, P. H., Munro, G. D., Apanovitch, A. M., Scepansky, J. A., &
Lockhart, L. K. (2003). Spontaneous skepticism: The interplay of
motivation and expectation in responses to favorable and unfavorable
medical diagnoses. Personality and Social Psychology Bulletin, 29,
1120–1132.
Duffy, M. (Ed.). (1999). Handbook of counseling and psychotherapy with
older adults. New York: Wiley.
Eells, T. D., Lombart, K. G., Kendjelic, E. M., Turner, L. C., & Lucas, C.
(2005). The quality of psychotherapy case formulations: A comparison
of expert, experienced, and novice cognitive– behavioral and psychody-
namic therapists. Journal of Consulting and Clinical Psychology, 73,
579–589.
Gambrill, E. (2005). Critical thinking in clinical practice: Improving the
accuracy of judgments and decisions (2nd ed.). Hoboken, NJ: Wiley.
Gamst, G., Dana, R. H., Der-Karabetian, A., & Kramer, T. (2000). Ethnic
match and patient ethnicity effects on global assessment and visitation.
Journal of Community Psychology, 28, 547–564.
Garb, H. N. (1998). Clinical judgment. In H. N. Garb (Ed.), Studying the
clinician: Judgment research and psychological assessment (pp. 173–
206). Washington, DC: American Psychological Association.
Goodheart, C. D. (2006). Evidence, endeavor, and expertise in psychology
practice. In C. D. Goodheart, A. E. Kazdin, & R. J. Sternberg (Eds.),
Evidence-based psychotherapy: Where practice and research meet (pp.
37–61). Washington, DC: American Psychological Association.
Greenberg, L. S., & Newman, F. L. (1996). An approach to psychotherapy
change process research: Introduction to the special section. Journal of
Consulting and Clinical Psychology, 64, 435–438.
Groth-Marnat, G., Beutler, L. E., & Roberts, R. I. (2001). Client charac-
teristics and psychotherapy: Perspectives, support, interactions, and
implications for training. Australian Psychologist, 36, 115–121.
Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C.
(2000). Clinical versus mechanical prediction: A meta-analysis. Psy-
chological Assessment, 12, 19–30.
Hall, G. C. N. (2001). Psychotherapy research with ethnic minorities:
Empirical, ethical, and conceptual issues. Journal of Consulting and
Clinical Psychology, 69, 502–510.
Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist
practitioner: Research and accountability in the age of managed care
(2nd ed.). Boston: Allyn & Bacon.
Haynes, R. B., Devereaux, P. J., & Guyatt, G. H. (2002). Clinical expertise
in the era of evidence-based medicine and patient choice. Evidence-
Based Medicine, 7, 36–38.
Henry, W. P., Schacht, T. E., Strupp, H. H., Butler, S. F., & Binder, J. L.
(1993). Effects of training in time-limited dynamic psychotherapy:
Changes in therapist behavior. Journal of Consulting and Clinical
Psychology, 61, 434–440.
Hollon, S. D., Stewart, M. O., & Strunk, D. (2006). Enduring effects for
cognitive behavior therapy in the treatment of depression and anxiety.
Annual Review of Psychology, 57, 285–315.
Horvath, A. O., & Bedi, R. P. (2002). The alliance. In J. C. Norcross (Ed.),
Psychotherapy relationships that work: Therapist contributions and
responsiveness to patients (pp. 37–70). New York: Oxford University
Press.
Huppert, J. D., Bufka, L. F., Barlow, D. H., Gorman, J. M., Shear, M. K.,
& Woods, S. W. (2001). Therapists, therapist variables, and cognitive–
behavioral therapy outcome in a multicenter trial for panic disorder.
Journal of Consulting and Clinical Psychology, 69, 747–755.
Institute of Medicine. (2001). Crossing the quality chasm: A new health
system for the 21st century. Washington, DC: National Academies
Press.
Institute of Medicine. (2003). Unequal treatment: Confronting racial and
ethnic disparities in health care (B. D. Smedley, A. Stith, & A. R.
Nelson, Eds.). Washington, DC: National Academies Press.
Kazdin, A. E., & Weisz, J. R. (Eds.). (2003). Evidence-based psychother-
apies for children and adolescents. New York: Guilford Press.
Kessler, R. C., Stang, P., Wittchen, H. U., Stein, M., & Walters, E. E.
(1999). Lifetime co-morbidities between social phobia and mood dis-
orders in the US National Comorbidity Survey. Psychological Medi-
cine, 29, 555–567.
Kim, D., Wampold, B. E., & Bolt, D. M. (in press). Therapist effects in
psychotherapy: A random effects modeling of the NIMH TDCRP data.
Psychotherapy Research.
Kleinman, A. M. (1977). Depression, somatization and the “new cross-
cultural psychiatry.” Social Science and Medicine, 11, 3–10.
Krueger, R. F. (2002). Psychometric perspectives on comorbidity. In J. E.
Helzer & J. J. Hudziak (Eds.), Defining psychopathology in the 21st
century: DSM–V and beyond (pp. 41–54). Washington, DC: American
Psychiatric Press.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bul-
letin, 108, 480–498.
Lambert, M. J., Bergin, A. E., & Garfield, S. L. (2004). Introduction and
historical overview. In M. J. Lambert (Ed.), Bergin and Garfield’s
handbook of psychotherapy and behavior change (5th ed., pp. 3–15).
New York: Wiley.
Lambert, M. J., Harmon, C., Slade, K., Whipple, J. L., & Hawkins, E. J.
(2005). Providing feedback to psychotherapists on their patients’
progress: Clinical results and practice suggestions. Journal of Clinical
Psychology, 61, 165–174.
Lambert, M. J., & Ogles, B. M. (2004). The efficacy and effectiveness of
psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook
of psychotherapy and behavior change (5th ed., pp. 139–193). New
York: Wiley.
LeVine, R. A. (1983). Fertility and child development: An anthropolog-
ical approach. New Directions for Child Development, 20, 45–55.
Lipsey, M. W., & Wilson, D. B. (2001). The way in which intervention
studies have “personality” and why it is important to meta-analysis.
Evaluation and the Health Professions, 24, 236–254.
Martin, D. J., Garske, J. P., & Davis, M. K. (2000). Relation of the
therapeutic alliance with outcome and other variables: A meta-analytic
review. Journal of Consulting and Clinical Psychology, 68, 438–450.
McReynolds, P. (1997). Lightner Witmer: His life and times. Washington,
DC: American Psychological Association.
Meehl, P. E. (1954). Clinical vs. statistical prediction: A theoretical
analysis and a review of the evidence. Minneapolis: University of
Minnesota Press.
Miller, S. D., Duncan, B. L., & Hubble, M. A. (2005). Outcome-informed
clinical work. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of
psychotherapy integration (2nd ed., pp. 84–102). London: Oxford
University Press.
Miller, W. R., & Rollnick, S. (2002). Motivational interviewing: Prepar-
ing people for change (2nd ed.). New York: Guilford Press.
Moore, C. C., & Mathews, H. F. (Eds.). (2001). The psychology of
cultural experience. Cambridge, England: Cambridge University Press.
Muñoz, R. F., Hollon, S. D., McGrath, E., Rehm, L. P., & VandenBos,
G. R. (1994). On the AHCPR Depression in Primary Care guidelines:
Further considerations for practitioners. American Psychologist, 49,
42–61.
Nathan, P. E. (1998). Practice guidelines: Not yet ideal. American Psy-
chologist, 53, 290–299.
Nathan, P. E., & Gorman, J. M. (Eds.). (2002). A guide to treatments that
work (2nd ed.). London: Oxford University Press.
National Institutes of Health. (2004). State implementation of evidence-
based practices: Bridging science and service (National Institute of
Mental Health and Substance Abuse and Mental Health Services Ad-
ministration Request for Application MH-03-007). Retrieved Novem-
ber 19, 2004, from http://grants1.nih.gov/grants/guide/rfa-files/RFA-
MH-03-007.html
Newman, D. L., Moffitt, T. E., Caspi, A., & Silva, P. A. (1998). Comorbid
mental disorders: Implications for treatment and sample selection.
Journal of Abnormal Psychology, 107, 305–311.
Norcross, J. C. (2001). Purposes, processes, and products of the task force
on empirically supported therapy relationships. Psychotherapy: Theory,
Research, Practice, Training, 38, 345–356.
Norcross, J. C. (Ed.). (2002). Psychotherapy relationships that work:
Therapist contributions and responsiveness to patients. New
York: Oxford University Press.
Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.). (2005). Evidence-
based practices in mental health: Debate and dialogue on the
fundamental questions. Washington, DC: American Psychological
Association.
Norcross, J. C., & Goldfried, M. R. (Eds.). (2005). Handbook of psycho-
therapy integration (2nd ed.). New York: Oxford University Press.
Persons, J. B. (1991). Psychotherapy outcome studies do not accurately
represent current models of psychotherapy: A proposed remedy. Amer-
ican Psychologist, 46, 99 –106.
Prochaska, J. O., Norcross, J. C., & DiClemente, C. C. (1994). Changing
for good. New York: Morrow.
Project MATCH Research Group. (1998). Therapist effects in three treat-
ments for alcohol problems. Psychotherapy Research, 8, 455–474.
Reed, G. M., & Eisman, E. (2006). Uses and misuses of evidence:
Managed care, treatment guidelines, and outcomes measurement in
professional practice. In C. D. Goodheart, A. E. Kazdin, & R. J.
Sternberg (Eds.), Evidence-based psychotherapy: Where practice and
research meet (pp. 13–35). Washington, DC: American Psychological
Association.
Rosenthal, R. (1990). How are we doing in soft psychology? American
Psychologist, 45, 775–777.
Roth, A., & Fonagy, P. (2004). What works for whom? A critical review
of psychotherapy research (2nd ed.). New York: Guilford Press.
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Rich-
ardson, W. S. (1996). Evidence based medicine: What it is and what it
isn’t. British Medical Journal, 312, 71–72.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., &
Haynes, R. B. (2000). Evidence based medicine: How to practice and
teach EBM (2nd ed.). London: Churchill Livingstone.
Sameroff, A. J., Lewis, M., & Miller, S. M., (Eds.). (2000). Handbook of
developmental psychopathology (2nd ed.). Dordrecht, the Netherlands:
Kluwer Academic.
Shakow, D., Hilgard, E. R., Kelly, E. L., Luckey, B., Sanford, R. N., &
Shaffer, L. F. (1947). Recommended graduate training program in
clinical psychology. American Psychologist, 2, 539–558.
Shirk, S. R., & Karver, M. (2003). Prediction of treatment outcome from
relationship variables in child and adolescent therapy: A meta-analytic
review. Journal of Consulting and Clinical Psychology, 71, 452–464.
Skovholt, T. M., & Jennings, L. (2004). Master therapists: Exploring
expertise in therapy and counseling. Needham Heights, MA: Allyn &
Bacon.
Smedley, A., & Smedley, B. D. (2005). Race as biology is fiction, racism
as a social problem is real: Anthropological and historical perspectives
on the social construction of race. American Psychologist, 60, 16–26.
Smith, M. L., & Glass, G. V (1977). Meta-analysis of psychotherapy
outcome studies. American Psychologist, 32, 752–760.
Smith, M. L., Glass, G. V, & Miller, T. L. (1980). The benefits of
psychotherapy. Baltimore: Johns Hopkins University Press.
Sox, H. C., Jr., & Woolf, S. H. (1993). Evidence-based practice guidelines
from the U.S. Preventive Services Task Force. Journal of the American
Medical Association, 269, 2678.
Strauss, C., & Quinn, N. (1992). Preliminaries to a theory of culture
acquisition. In H. L. Pick Jr., P. W. van den Broek, & D. C. Knill (Eds.),
Cognition: Conceptual and methodological issues (pp. 267–294).
Washington, DC: American Psychological Association.
Stricker, G., Abrahamson, D. J., Bologna, N. C., Hollon, S. D., Robinson,
E. A., & Reed, G. M. (1999). Treatment guidelines: The good, the bad,
and the ugly. Psychotherapy, 36, 69–79.
Stricker, G., & Trierweiler, S. J. (1995). The local clinical scientist: A
bridge between science and practice. American Psychologist, 50, 995–
1002.
Sue, S. (1998). In search of cultural competence in psychotherapy and
counseling. American Psychologist, 53, 440–448.
Sue, S., Fujino, D. C., Hu, L. T., Takeuchi, D. T., & Zane, N. W. S.
(1991). Community mental health services for ethnic minority groups:
A test of the cultural responsiveness hypothesis. Journal of Consulting
and Clinical Psychology, 59, 533–540.
Sue, S., & Lam, A. G. (2002). Cultural and demographic diversity. In J. C.
Norcross (Ed.), Psychotherapy relationships that work: Therapist con-
tributions and responsiveness to patients (pp. 401–421). New York:
Oxford University Press.
Sue, S., Zane, N., & Young, K. (1994). Research on psychotherapy with
culturally diverse populations. In A. E. Bergin & S. L. Garfield (Eds.),
Handbook of psychotherapy and behavior change (4th ed., pp. 783–
817). New York: Wiley.
Tanenbaum, S. J. (2005). Evidence-based practice as mental health policy:
Three controversies and a caveat. Health Affairs, 24, 163–173.
Thorne, F. C. (1947). The clinical method in science. American Psychol-
ogist, 2, 159–166.
Toth, S. L., & Cicchetti, D. (1999). Developmental psychopathology and
child psychotherapy. In S. W. Russ & T. H. Ollendick (Eds.), Hand-
book of psychotherapies with children and families (pp. 15–44). Dor-
drecht, the Netherlands: Kluwer Academic.
Wampold, B. E. (2001). The great psychotherapy debate: Models, meth-
ods, and findings. Mahwah, NJ: Erlbaum.
Wampold, B. E., & Brown, G. (2005). Estimating therapist variability in
outcomes attributable to therapists: A naturalistic study of outcomes in
managed care. Journal of Consulting and Clinical Psychology, 73,
914–923.
Wampold, B. E., Lichtenberg, J. W., & Waehler, C. A. (2002). Principles
of empirically supported interventions in counseling psychology. Coun-
seling Psychologist, 30, 197–217.
Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., &
Ahn, H. (1997). A meta-analysis of outcome studies comparing bona
fide psychotherapies: Empirically, “all must have prizes.” Psychologi-
cal Bulletin, 122, 203–215.
Weinberger, J. (1995). Common factors aren’t so common: The common
factors dilemma. Clinical Psychology: Science and Practice, 2, 45–69.
Weisz, J. R., & Hawley, K. M. (2002). Developmental factors in the
treatment of adolescents. Journal of Consulting and Clinical Psychol-
ogy, 70, 21–43.
Weisz, J. R., Hawley, K. M., & Doss, A. J. (2004). Empirically tested
psychotherapies for youth internalizing and externalizing problems and
disorders. Child and Adolescent Psychiatric Clinics of North America,
13, 729–815.
Weisz, J. R., Jensen, A. L., & McLeod, B. D. (2005). Development and
dissemination of child and adolescent psychotherapies: Milestones,
methods, and a new deployment-focused model. In E. D. Hibbs & P. S.
Jensen (Eds.), Psychosocial treatments for child and adolescent disor-
ders: Empirically based strategies for clinical practice (2nd ed., pp.
9–39). Washington, DC: American Psychological Association.
Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). Empirical
status of empirically supported psychotherapies: Assumptions, findings,
and reporting in controlled clinical trials. Psychological Bulletin, 130,
631–663.
Westen, D., & Weinberger, J. (2004). When clinical description becomes
statistical prediction. American Psychologist, 59, 595–613.
Westen, D., & Weinberger, J. (2005). In praise of clinical judgment:
Meehl’s forgotten legacy. Journal of Clinical Psychology, 61, 1257–
1276.
Witmer, L. (1996). Clinical psychology. American Psychologist, 51, 248–
251. (Original work published 1907)
Woolf, S. H., & Atkins, D. A. (2001). The evolving role of prevention in
health care: Contributions of the U.S. Preventive Services Task Force.
American Journal of Preventive Medicine, 29(3, Suppl.), 13–20.
Yates, B. T. (1994). Toward the incorporation of costs, cost-effectiveness
analysis, and cost–benefit analysis into clinical research. Journal of
Consulting and Clinical Psychology, 62, 729–736.
Zane, N., Sue, S., Young, K., Nunez, J., & Hall, G. N. (2004). Research
on psychotherapy with culturally diverse populations. In M. J. Lambert
(Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior
change (5th ed., pp. 767–804). New York: Wiley.
Zarit, S. H., & Knight, B. G. (Eds.). (1996). A guide to psychotherapy and
aging: Effective clinical interventions in a life-stage context. Washing-
ton, DC: American Psychological Association.
Appendix
American Psychological Association Policy Statement on Evidence-Based
Practice in Psychology
The following statement was approved as policy of the
American Psychological Association (APA) by the APA
Council of Representatives during its August 2005 meeting.
Evidence-based practice in psychology (EBPP) is the
integration of the best available research with clinical ex-
pertise in the context of patient characteristics, culture, and
preferences.A1 This definition of EBPP closely parallels the
definition of evidence-based practice adopted by the Institute
of Medicine (2001, p. 147) as adapted from Sackett and
colleagues (2000): “Evidence-based practice is the integration
of best research evidence with clinical expertise and patient
values.” The purpose of EBPP is to promote effective psy-
chological practice and enhance public health by applying
empirically supported principles of psychological assessment,
case formulation, therapeutic relationship, and intervention.
Best Research Evidence
Best research evidence refers to scientific results related to
intervention strategies, assessment, clinical problems, and
patient populations in laboratory and field settings as well
as to clinically relevant results of basic research in psy-
chology and related fields. A sizable body of evidence
drawn from a variety of research designs and methodolo-
gies attests to the effectiveness of psychological practices.
Generally, evidence derived from clinically relevant re-
search on psychological practices should be based on sys-
tematic reviews, reasonable effect sizes, statistical and clin-
ical significance, and a body of supporting evidence. The
validity of conclusions from research on interventions is
based on a general progression from clinical observation
through systematic reviews of randomized clinical trials,
while also recognizing gaps and limitations in the existing
literature and its applicability to the specific case at hand
(APA, 2002). Health policy and practice are also informed
by research using a variety of methods in such areas as
public health, epidemiology, human development, social
relations, and neuroscience.
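As one concrete illustration of the effect size metrics such evidence
typically reports (a standard formula offered for exposition, not taken
from the policy statement itself), the standardized mean difference
between a treated group and a comparison group is

  d = \frac{\bar{X}_{\mathrm{treatment}} - \bar{X}_{\mathrm{comparison}}}{s_{\mathrm{pooled}}},

where s_{\mathrm{pooled}} is the pooled within-group standard deviation.
Reporting d together with its confidence interval lets readers judge
both the magnitude and the precision of an effect, which is what
reasonable effect sizes and statistical and clinical significance
jointly require.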
Researchers and practitioners should join together to
ensure that the research available on psychological practice
is both clinically relevant and internally valid. It is impor-
tant not to assume that interventions that have not yet been
studied in controlled trials are ineffective. However, widely
used psychological practices as well as innovations devel-
oped in the field or laboratory should be rigorously evalu-
ated and barriers to conducting this research should be
identified and addressed.
Clinical Expertise
Psychologists’ clinical expertise encompasses a number of
competencies that promote positive therapeutic outcomes.
These competencies include a) conducting assessments and
developing diagnostic judgments, systematic case formu-
lations, and treatment plans; b) making clinical decisions,
implementing treatments, and monitoring patient progress;
c) possessing and using interpersonal expertise, including
the formation of therapeutic alliances; d) continuing to
self-reflect and acquire professional skills; e) evaluating
and using research evidence in both basic and applied
psychological science; f) understanding the influence of
individual, cultural, and contextual differences on treat-
ment; g) seeking available resources (e.g., consultation,
adjunctive or alternative services) as needed; and h) having
a cogent rationale for clinical strategies. Expertise develops
from clinical and scientific training, theoretical understand-
ing, experience, self-reflection, knowledge of current re-
search, and continuing education and training.
Clinical expertise is used to integrate the best research
evidence with clinical data (e.g., information about the
patient obtained over the course of treatment) in the context
of the patient’s characteristics and preferences to deliver
services that have a high probability of achieving the goals
of treatment. Integral to clinical expertise is an awareness
of the limits of one’s knowledge and skills and attention to
the heuristics and biases—both cognitive and affective—
that can affect clinical judgment. Moreover, psychologists
understand how their own characteristics, values, and con-
text interact with those of the patient.
Patients’ Characteristics, Values, and
Context
Psychological services are most effective when respon-
sive to the patient’s specific problems, strengths, person-
ality, sociocultural context, and preferences. Many pa-
tient characteristics, such as functional status, readiness
to change, and level of social support, are known to be
related to therapeutic outcomes. Other important patient
characteristics to consider in forming and maintaining a
treatment relationship and in implementing specific in-
terventions include a) variations in presenting problems
or disorders, etiology, concurrent symptoms or syn-
dromes, and behavior; b) chronological age, develop-
mental status, developmental history, and life stage; c)
sociocultural and familial factors (e.g., gender, gender
identity, ethnicity, race, social class, religion, disability
status, family structure, and sexual orientation); d) en-
vironmental context (e.g., institutional racism, health
care disparities) and stressors (e.g., unemployment, ma-
jor life events); and e) personal preferences, values, and
preferences related to treatment (e.g., goals, beliefs,
worldviews, and treatment expectations). Some effec-
tive treatments involve interventions directed toward
others in the patient’s environment, such as parents,
teachers, and caregivers. A central goal of EBPP is to
maximize patient choice among effective alternative
interventions.
Clinical Implications
Clinical decisions should be made in collaboration with the
patient, based on the best clinically relevant evidence, and
with consideration for the probable costs, benefits, and
available resources and options.A2 It is the treating psy-
chologist who makes the ultimate judgment regarding a
particular intervention or treatment plan. The involvement
of an active, informed patient is generally crucial to the
success of psychological services. Treatment decisions
should never be made by untrained persons unfamiliar with
the specifics of the case.
The treating psychologist determines the applicability
of research conclusions to a particular patient. Individual
patients may require decisions and interventions not di-
rectly addressed by the available research. The application
of research evidence to a given patient always involves
probabilistic inferences. Therefore, ongoing monitoring of
patient progress and adjustment of treatment as needed are
essential to EBPP.
APA encourages the development of health care pol-
icies that reflect this view of evidence-based psychological
practice.
A1 To be consistent with discussions of evidence-based practice in other
areas of health care, we use the term patient to refer to the child,
adolescent, adult, older adult, couple, family, group, organization, com-
munity, or other population receiving psychological services. However,
we recognize that in many situations there are important and valid reasons
for using such terms as client, consumer, or person in place of patient to
describe the recipient of services.
A2 For some patients (e.g., children and youth), the referral, choice of
therapist and treatment, and decision to end treatment are most often made
by others (e.g., parents) rather than by the individual who is the target of
treatment. This means that the integration of evidence and practice in such
cases is likely to involve information sharing and decision making in
concert with others.
Reporting Standards for Research in Psychology
Why Do We Need Them? What Might They Be?
APA Publications and Communications Board Working Group on Journal Article Reporting Standards
In anticipation of the impending revision of the Publication
Manual of the American Psychological Association, APA’s
Publications and Communications Board formed the
Working Group on Journal Article Reporting Standards
(JARS) and charged it to provide the board with back-
ground and recommendations on information that should
be included in manuscripts submitted to APA journals that
report (a) new data collections and (b) meta-analyses. The
JARS Group reviewed efforts in related fields to develop
standards and sought input from other knowledgeable
groups. The resulting recommendations contain (a) stan-
dards for all journal articles, (b) more specific standards
for reports of studies with experimental manipulations or
evaluations of interventions using research designs involv-
ing random or nonrandom assignment, and (c) standards
for articles reporting meta-analyses. The JARS Group an-
ticipated that standards for reporting other research de-
signs (e.g., observational studies, longitudinal studies)
would emerge over time. This report also (a) examines
societal developments that have encouraged researchers to
provide more details when reporting their studies, (b) notes
important differences between requirements, standards,
and recommendations for reporting, and (c) examines ben-
efits and obstacles to the development and implementation
of reporting standards.
Keywords: reporting standards, research methods, meta-
analysis

The Working Group on Journal Article Reporting Standards was com-
posed of Mark Appelbaum, Harris Cooper (Chair), Scott Maxwell, Arthur
Stone, and Kenneth J. Sher. The working group wishes to thank members
of the American Psychological Association’s (APA’s) Publications and
Communications Board, the APA Council of Editors, and the Society for
Research Synthesis Methodology for comments on this report and the
standards contained herein.
Correspondence concerning this report should be addressed to Harris
Cooper, Department of Psychology and Neuroscience, Duke University,
Box 90086, Durham, NC 27708-0739. E-mail: cooperh@duke.edu
The American Psychological Association (APA) Working Group on Journal Article Reporting Standards (the JARS Group) arose out of a request for
information from the APA Publications and Communica-
tions Board. The Publications and Communications Board
had previously allowed any APA journal editor to require
that a submission labeled by an author as describing a
randomized clinical trial conform to the CONSORT (Con-
solidated Standards of Reporting Trials) reporting guide-
lines (Altman et al., 2001; Moher, Schulz, & Altman,
2001). In this context, and recognizing that APA was about
to initiate a revision of its Publication Manual (American
Psychological Association, 2001), the Publications and
Communications Board formed the JARS Group to provide
itself with input on how the newly developed reporting
standards related to the material currently in its Publication
Manual and to propose some related recommendations for
the new edition.
The JARS Group was formed of five current and
previous editors of APA journals. It divided its work into
six stages:
● establishing the need for more well-defined report-
ing standards,
● gathering the standards developed by other related
groups and professional organizations relating to
both new data collections and meta-analyses,
● drafting a set of standards for APA journals,
● sharing the drafted standards with cognizant others,
● refining the standards yet again, and
● addressing additional and unresolved issues.
This article is the report of the JARS Group’s findings
and recommendations. It was approved by the Publications
and Communications Board in the summer of 2007 and
again in the spring of 2008 and was transmitted to the task
force charged with revising the Publication Manual for
consideration as it did its work. The content of the report
roughly follows the stages of the group’s work. Those
wishing to move directly to the reporting standards can go
to the sections titled Information for Inclusion in Manu-
scripts That Report New Data Collections and Information
for Inclusion in Manuscripts That Report Meta-Analyses.
Why Are More Well-Defined
Reporting Standards Needed?
The JARS Group members began their work by sharing
with each other documents they knew of that related to
reporting standards. The group found that the past decade
had witnessed two developments in the social, behavioral,
and medical sciences that encouraged researchers to pro-
vide more details when they reported their investigations.
The first impetus for more detail came from the worlds of
policy and practice. In these realms, the call for use of
“evidence-based” decision making had placed a new em-
phasis on the importance of understanding how research
was conducted and what it found. For example, in 2006, the
APA Presidential Task Force on Evidence-Based Practice
defined the term evidence-based practice to mean “the
integration of the best available research with clinical
expertise” (p. 273; italics added). The report went on to say
that “evidence-based practice requires that psychologists
recognize the strengths and limitations of evidence ob-
tained from different types of research” (p. 275).
In medicine, the movement toward evidence-based
practice is now so pervasive (see Sackett, Rosenberg, Muir
Gray, Haynes, & Richardson, 1996) that there exists an
international consortium of researchers (the Cochrane Col-
laboration; http://www.cochrane.org/index.htm) producing
thousands of papers examining the cumulative evidence on
everything from public health initiatives to surgical proce-
dures. Another example of accountability in medicine, and
the importance of relating medical practice to solid medical
science, comes from the member journals of the Interna-
tional Committee of Medical Journal Editors (2007), who
adopted a policy requiring registration of all clinical trials
in a public trials registry as a condition of consideration for
publication.
In education, the No Child Left Behind Act of 2001
(2002) required that the policies and practices adopted by
schools and school districts be “scientifically based,” a term
that appears over 100 times in the legislation. In public policy,
a consortium similar to that in medicine now exists (the
Campbell Collaboration; http://www.campbellcollaboration.
org), as do organizations meant to promote government poli-
cymaking based on rigorous evidence of program effective-
ness (e.g., the Coalition for Evidence-Based Policy; http://
www.excelgov.org/index.php?keyword=a432fbc34d71c7).
Each of these efforts operates with a definition of what con-
stitutes sound scientific evidence. The developers of previous
reporting standards argued that new transparency in reporting
is needed so that judgments can be made by users of evidence
about the appropriate inferences and applications derivable
from research findings.
The second impetus for more detail in research report-
ing has come from within the social and behavioral science
disciplines. As evidence about specific hypotheses and
theories accumulates, greater reliance is being placed on
syntheses of research, especially meta-analyses (Cooper,
2009; Cooper, Hedges, & Valentine, 2009), to tell us what
we know about the workings of the mind and the laws of
behavior. Different findings relating to a specific question
examined with various research designs are now mined by
second users of the data for clues to the mediation of basic
psychological, behavioral, and social processes. These
clues emerge by clustering studies based on distinctions in
their methods and then comparing their results. This syn-
thesis-based evidence is then used to guide the next gen-
eration of problems and hypotheses studied in new data
collections. Without complete reporting of methods and
results, the utility of studies for purposes of research syn-
thesis and meta-analysis is diminished.
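A sketch of the standard inverse-variance machinery (summarized here for
illustration; it is not prescribed by the JARS Group) shows why complete
reporting matters. A fixed-effect meta-analysis combines k study
estimates T_1, \ldots, T_k as

  \bar{T} = \frac{\sum_{i=1}^{k} w_i T_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i},

where v_i is the sampling variance of study i’s estimate. Computing v_i
requires the primary study’s sample sizes and variability statistics;
when a report omits them, the study either drops out of the synthesis or
enters it only through rough imputation.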
The JARS Group viewed both of these stimulants to
action as positive developments for the psychological sci-
ences. The first provides an unprecedented opportunity for
psychological research to play an important role in public
and health policy. The second promises a sounder evidence
base for explanations of psychological phenomena and a
next generation of research that is more focused on resolv-
ing critical issues.
The Current State of the Art
Next, the JARS Group collected efforts of other social and
health organizations that had recently developed reporting
standards. Three recent efforts quickly came to the group’s
attention. Two efforts had been undertaken in the medical
and health sciences to improve the quality of reporting of
primary studies and to make reports more useful for the
next users of the data. The first effort is called CONSORT
(Consolidated Standards of Reporting Trials; Altman et al.,
2001; Moher et al., 2001). The CONSORT standards were
developed by an ad hoc group primarily composed of
biostatisticians and medical researchers. CONSORT relates
to the reporting of studies that carried out random assign-
ment of participants to conditions. It comprises a checklist
of study characteristics that should be included in research
reports and a flow diagram that provides readers with a
description of the number of participants as they progress
through the study—and by implication the number who
drop out—from the time they are deemed eligible for
inclusion until the end of the investigation. These guide-
lines are now required by the top-tier medical journals and
many other biomedical journals. Some APA journals also
use the CONSORT guidelines.
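To illustrate the flow diagram with hypothetical numbers (invented here
for exposition, not taken from the CONSORT materials): of 120 patients
assessed for eligibility, 20 are excluded or decline, and 100 are
randomized, 50 to each condition; 5 intervention and 8 control
participants are lost to follow-up, leaving 45 and 42 analyzed. Each box
in the diagram records one such count, so attrition at every stage is
explicit rather than implied.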
The second effort is called TREND (Transparent Re-
porting of Evaluations with Nonexperimental Designs; Des
Jarlais, Lyles, Crepaz, & the TREND Group, 2004).
TREND was developed under the initiative of the Centers
for Disease Control, which brought together a group of
editors of journals related to public health, including sev-
eral journals in psychology. TREND contains a 22-item
checklist, similar to CONSORT, but with a specific focus
on reporting standards for studies that use quasi-experi-
mental designs, that is, group comparisons in which the
groups were established using procedures other than ran-
dom assignment to place participants in conditions.
In the social sciences, the American Educational Re-
search Association (2006) recently published “Standards
for Reporting on Empirical Social Science Research in
AERA Publications.” These standards encompass a broad
range of research designs, including both quantitative and
qualitative approaches, and are divided into eight general
areas, including problem formulation; design and logic of
the study; sources of evidence; measurement and classifi-
cation; analysis and interpretation; generalization; ethics in
reporting; and title, abstract, and headings. They contain
about two dozen general prescriptions for the reporting of
studies as well as separate prescriptions for quantitative and
qualitative studies.
Relation to the APA Publication Manual
The JARS Group also examined previous editions of the
APA Publication Manual and discovered that for the last
half century it has played an important role in the estab-
lishment of reporting standards. The first edition of the
APA Publication Manual, published in 1952 as a supple-
ment to Psychological Bulletin (American Psychological
Association, Council of Editors, 1952), was 61 pages long,
printed on 6-in. by 9-in. paper, and cost $1. The principal
divisions of manuscripts were titled Problem, Method, Re-
sults, Discussion, and Summary (now the Abstract). Ac-
cording to the first Publication Manual, the section titled
Problem was to include the questions asked and the reasons
for asking them. When experiments were theory-driven, the
theoretical propositions that generated the hypotheses were
to be given, along with the logic of the derivation and a
summary of the relevant arguments. The method was to be
“described in enough detail to permit the reader to repeat
the experiment unless portions of it have been described in
other reports which can be cited” (p. 9). This section was to
describe the design and the logic of relating the empirical
data to theoretical propositions, the subjects, sampling and
control devices, techniques of measurement, and any ap-
paratus used. Interestingly, the 1952 Manual also stated,
“Sometimes space limitations dictate that the method be
described synoptically in a journal, and a more detailed
description be given in auxiliary publication” (p. 25). The
Results section was to include enough data to justify the
conclusions, with special attention given to tests of statis-
tical significance and the logic of inference and generali-
zation. The Discussion section was to point out limitations
of the conclusions, relate them to other findings and widely
accepted points of view, and give implications for theory or
practice. Negative or unexpected results were not to be
accompanied by extended discussions; the editors wrote,
“Long ‘alibis,’ unsupported by evidence or sound theory,
add nothing to the usefulness of the report” (p. 9). Also,
authors were encouraged to use good grammar and to avoid
jargon, as “some writing in psychology gives the impres-
sion that long words and obscure expressions are regarded
as evidence of scientific status” (pp. 25–26).
Through the following editions, the recommendations
became more detailed and specific. Of special note was the
Report of the Task Force on Statistical Inference (Wilkin-
son & the Task Force on Statistical Inference, 1999), which
presented guidelines for statistical reporting in APA jour-
nals that informed the content of the 4th edition of the
Publication Manual. Although the 5th edition of the Man-
ual does not contain a clearly delineated set of reporting
standards, this does not mean the Manual is devoid of
standards. Instead, recommendations, standards, and re-
quirements for reporting are embedded in various sections
of the text. Most notably, statements regarding the method
and results that should be included in a research report (as
well as how this information should be reported) appear in
the Manual’s description of the parts of a manuscript (pp.
10 –29). For example, when discussing who participated in
a study, the Manual states, “When humans participated as
the subjects of the study, report the procedures for selecting
and assigning them and the agreements and payments
made” (p. 18). With regard to the Results section, the
Manual states, “Mention all relevant results, including
those that run counter to the hypothesis” (p. 20), and it
provides descriptions of “sufficient statistics” (p. 23) that
need to be reported.
Thus, although reporting standards and requirements
are not highlighted in the most recent edition of the Man-
ual, they appear nonetheless. In that context, then, the
proposals offered by the JARS Group can be viewed not as
breaking new ground for psychological research but rather
as a systematization, clarification, and—to a lesser extent
than might at first appear—an expansion of standards that
already exist. The intended contribution of the current
effort, then, becomes as much one of increased emphasis as
increased content.
Drafting, Vetting, and Refinement of
the JARS
Next, the JARS Group canvassed the APA Council of
Editors to ascertain the degree to which the CONSORT and
TREND standards were already in use by APA journals
and to make us aware of other reporting standards. Also,
the JARS Group requested from the APA Publications
Office data it had on the use of auxiliary websites by
authors of APA journal articles. With this information in
hand, the JARS Group compared the CONSORT, TREND,
and AERA standards to one another and developed a com-
bined list of nonredundant elements contained in any or all
of the three sets of standards. The JARS Group then ex-
amined the combined list, rewrote some items for clarity
and ease of comprehension by an audience of psychologists
and other social and behavioral scientists, and added a few
suggestions of its own.
This combined list was then shared with the APA
Council of Editors, the APA Publication Manual Revision
Task Force, and the Publications and Communications
Board. These groups were requested to react to it. After
receiving these reactions and anonymous reactions from
reviewers chosen by the American Psychologist, the JARS
Group revised its report and arrived at the list of recom-
mendations contained in Tables 1, 2, and 3 and Figure 1.
The report was then approved again by the Publications and
Communications Board.
Information for Inclusion in
Manuscripts That Report New Data
Collections
The entries in Tables 1 through 3 and Figure 1 divide the
reporting standards into three parts. First, Table 1 presents
information recommended for inclusion in all reports sub-
mitted for publication in APA journals. Note that these
recommendations contain only a brief entry regarding the
type of research design. Along with these general stan-
dards, then, the JARS Group also recommended that spe-
cific standards be developed for different types of research
designs. Thus, Table 2 provides standards for research
designs involving experimental manipulations or evalua-
tions of interventions (Module A). Next, Table 3 provides
Table 1
Journal Article Reporting Standards (JARS): Information Recommended for Inclusion in Manuscripts That Report
New Data Collections Regardless of Research Design
Paper section and topic / Description
Title and title page
Identify variables and theoretical issues under investigation and the
relationship between them
Author note contains acknowledgment of special circumstances:
Use of data also appearing in previous publications, dissertations, or conference papers
Sources of funding or other support
Relationships that may be perceived as conflicts of interest
Abstract
Problem under investigation
Participants or subjects; specifying pertinent characteristics; in animal research, include genus
and species
Study method, including:
Sample size
Any apparatus used
Outcome measures
Data-gathering procedures
Research design (e.g., experiment, observational study)
Findings, including effect sizes and confidence intervals and/or statistical significance levels
Conclusions and the implications or applications
Introduction
The importance of the problem:
Theoretical or practical implications
Review of relevant scholarship:
Relation to previous work
If other aspects of this study have been reported on previously, how the current report differs
from these earlier reports
Specific hypotheses and objectives:
Theories or other means used to derive hypotheses
Primary and secondary hypotheses, other planned analyses
How hypotheses and research design relate to one another
Method
Participant characteristics
Eligibility and exclusion criteria, including any restrictions based on demographic
characteristics
Major demographic characteristics as well as important topic-specific characteristics (e.g.,
achievement level in studies of educational interventions), or in the case of animal
research, genus and species
Sampling procedures Procedures for selecting participants, including:
The sampling method if a systematic sampling plan was implemented
Percentage of sample approached that participated
Self-selection (either by individuals or units, such as schools or clinics)
Settings and locations where data were collected
Agreements and payments made to participants
Institutional review board agreements, ethical standards met, safety monitoring
Sample size, power, and
precision
Intended sample size
Actual sample size, if different from intended sample size
How sample size was determined:
Power analysis, or methods used to determine precision of parameter estimates
Explanation of any interim analyses and stopping rules
Measures and covariates Definitions of all primary and secondary measures and covariates:
Include measures collected but not included in this report
Methods used to collect data
Methods used to enhance the quality of measurements:
Training and reliability of data collectors
Use of multiple observations
Information on validated or ad hoc instruments created for individual studies, for example,
psychometric and biometric properties
Research design Whether conditions were manipulated or naturally observed
Type of research design; provided in Table 3 are modules for:
Randomized experiments (Module A1)
Quasi-experiments (Module A2)
Other designs would have different reporting needs associated with them
Next, Table 3 provides standards for reporting either (a) a study involving random
assignment of participants to experimental or intervention
conditions (Module A1) or (b) quasi-experiments, in which
different groups of participants receive different experi-
mental manipulations or interventions but the groups are
formed (and perhaps equated) using a procedure other than
random assignment (Module A2). Using this modular ap-
proach, the JARS Group was able to incorporate the gen-
eral recommendations from the current APA Publication
Manual and both the CONSORT and TREND standards
into a single set of standards. This approach also makes it
possible for other research designs (e.g., observational
studies, longitudinal designs) to be added to the standards
by adding new modules.
The standards are categorized into the sections of a research report used by APA journals. To illustrate how the tables would be used, note that the Method section in Table 1 is divided into subsections regarding participant characteristics, sampling procedures, sample size, measures and covariates, and an overall categorization of the research design.
Table 1 (continued)
Paper section and topic Description
Results
Participant flow Total number of participants
Flow of participants through each stage of the study
Recruitment Dates defining the periods of recruitment and repeated measurements or follow-up
Statistics and data
analysis
Information concerning problems with statistical assumptions and/or data distributions that
could affect the validity of findings
Missing data:
Frequency or percentages of missing data
Empirical evidence and/or theoretical arguments for the causes of data that are missing, for
example, missing completely at random (MCAR), missing at random (MAR), or missing
not at random (MNAR)
Methods for addressing missing data, if used
For each primary and secondary outcome and for each subgroup, a summary of:
Cases deleted from each analysis
Subgroup or cell sample sizes, cell means, standard deviations, or other estimates of
precision, and other descriptive statistics
Effect sizes and confidence intervals
For inferential statistics (null hypothesis significance testing), information about:
The a priori Type I error rate adopted
Direction, magnitude, degrees of freedom, and exact p level, even if no significant effect is
reported
For multivariable analytic systems (e.g., multivariate analyses of variance, regression analyses,
structural equation modeling analyses, and hierarchical linear modeling) also include the
associated variance–covariance (or correlation) matrix or matrices
Estimation problems (e.g., failure to converge, bad solution spaces), anomalous data points
Statistical software program, if specialized procedures were used
Report any other analyses performed, including adjusted analyses, indicating those that were prespecified and those that were exploratory (though not necessarily at the same level of detail as the primary analyses)
Ancillary analyses Discussion of implications of ancillary analyses for statistical error rates
Discussion Statement of support or nonsupport for all original hypotheses:
Distinguished by primary and secondary hypotheses
Post hoc explanations
Similarities and differences between results and work of others
Interpretation of the results, taking into account:
Sources of potential bias and other threats to internal validity
Imprecision of measures
The overall number of tests or overlap among tests, and
Other limitations or weaknesses of the study
Generalizability (external validity) of the findings, taking into account:
The target population
Other contextual issues
Discussion of implications for future research, program, or policy
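To make the statistical entries in Table 1 above concrete, the following is a minimal sketch, in Python, of how the reportable quantities for a simple two-group comparison might be computed: cell sizes, means, and standard deviations; the direction, magnitude, degrees of freedom, and exact p value of the test; and a standardized effect size with a confidence interval. The standards prescribe no language or library; the function below, and its large-sample normal-approximation confidence interval for d, are illustrative assumptions.

import numpy as np
from scipy import stats

def two_group_report(x, y, alpha=0.05):
    # Illustrative sketch: reportable statistics for two independent groups.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    m1, m2 = x.mean(), y.mean()
    s1, s2 = x.std(ddof=1), y.std(ddof=1)
    # Pooled standard deviation and Cohen's d
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Independent-samples t test: direction, magnitude, df, exact p
    t, p = stats.ttest_ind(x, y)
    df = n1 + n2 - 2
    # Large-sample variance of d and a normal-theory confidence interval
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    z = stats.norm.ppf(1 - alpha / 2)
    ci = (d - z * np.sqrt(var_d), d + z * np.sqrt(var_d))
    return {"n": (n1, n2), "means": (m1, m2), "sds": (s1, s2),
            "t": t, "df": df, "p": p, "d": d, "d_ci": ci}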
Table 2
Module A: Reporting Standards for Studies With an Experimental Manipulation or Intervention (in Addition to
Material Presented in Table 1)
Paper section and topic Description
Method
Experimental
manipulations
or interventions
Details of the interventions or experimental manipulations intended for each study condition,
including control groups, and how and when manipulations or interventions were actually
administered, specifically including:
Content of the interventions or specific experimental manipulations
Summary or paraphrasing of instructions, unless they are unusual or compose the experimental
manipulation, in which case they may be presented verbatim
Method of intervention or manipulation delivery
Description of apparatus and materials used and their function in the experiment
Specialized equipment by model and supplier
Deliverer: who delivered the manipulations or interventions
Level of professional training
Level of training in specific interventions or manipulations
Number of deliverers and, in the case of interventions, the M, SD, and range of number of
individuals/units treated by each
Setting: where the manipulations or interventions occurred
Exposure quantity and duration: how many sessions, episodes, or events were intended to be
delivered, how long they were intended to last
Time span: how long it took to deliver the intervention or manipulation to each unit
Activities to increase compliance or adherence (e.g., incentives)
Use of language other than English and the translation method
Units of delivery
and analysis
Unit of delivery: How participants were grouped during delivery
Description of the smallest unit that was analyzed (and in the case of experiments, that was
randomly assigned to conditions) to assess manipulation or intervention effects (e.g., individuals,
work groups, classes)
If the unit of analysis differed from the unit of delivery, description of the analytical method used to
account for this (e.g., adjusting the standard error estimates by the design effect or using
multilevel analysis)
Results
Participant flow Total number of groups (if intervention was administered at the group level) and the number of
participants assigned to each group:
Number of participants who did not complete the experiment or crossed over to other conditions,
explain why
Number of participants used in primary analyses
Flow of participants through each stage of the study (see Figure 1)
Treatment fidelity Evidence on whether the treatment was delivered as intended
Baseline data Baseline demographic and clinical characteristics of each group
Statistics and data
analysis
Whether the analysis was by intent-to-treat, complier average causal effect, or other or multiple methods
Adverse events
and side effects
All important adverse events or side effects in each intervention group
Discussion Discussion of results taking into account the mechanism by which the manipulation or intervention
was intended to work (causal pathways) or alternative mechanisms
If an intervention is involved, discussion of the success of and barriers to implementing the
intervention, fidelity of implementation
Generalizability (external validity) of the findings, taking into account:
The characteristics of the intervention
How, what outcomes were measured
Length of follow-up
Incentives
Compliance rates
The “clinical or practical significance” of outcomes and the basis for these interpretations
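The units-of-delivery entry in Table 2 asks authors to describe how the analysis accounted for a mismatch between the unit of delivery and the unit of analysis. As one illustration (offered for exposition, not as a method the standards mandate), the sketch below applies the familiar design-effect correction, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intraclass correlation; the numbers are hypothetical.

import math

def design_effect(avg_cluster_size, icc):
    # Design effect for cluster-delivered interventions: 1 + (m - 1) * ICC
    return 1.0 + (avg_cluster_size - 1.0) * icc

def adjusted_se(naive_se, avg_cluster_size, icc):
    # Inflate a standard error computed as if participants were independent
    return naive_se * math.sqrt(design_effect(avg_cluster_size, icc))

# Hypothetical example: an intervention delivered to classes of 25, ICC = .10
deff = design_effect(25, 0.10)              # 3.4
se_corrected = adjusted_se(0.08, 25, 0.10)  # about 0.15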
Then, if the design being described involved an experimental manipulation or intervention, Table 2 presents additional information about the research design that should be reported, including a description of the manipulation or intervention itself and the units of delivery and analysis. Next, Table 3 presents
two separate sets of reporting standards to be used
depending on whether the participants in the study were
assigned to conditions using a random or nonrandom
procedure. Figure 1, an adaptation of the chart recom-
mended in the CONSORT guidelines, presents a chart
that should be used to present the flow of participants
through the stages of either an experiment or a quasi-
experiment. It details the amount and cause of partici-
pant attrition at each stage of the research.
In the future, new modules and flowcharts regarding
other research designs could be added to the standards to be
used in conjunction with Table 1. For example, tables could
be constructed to replace Table 2 for the reporting of
observational studies (e.g., studies with no manipulations
as part of the data collection), longitudinal studies, struc-
tural equation models, regression discontinuity designs,
single-case designs, or real-time data capture designs
(Stone & Shiffman, 2002), to name just a few.
Additional standards could be adopted for any of the parts of a report. For example, the Evidence-Based Behavioral Medicine Committee (Davidson et al., 2003) examined each of the 22 items on the CONSORT checklist and described, for each, special considerations for the reporting of research on behavioral medicine interventions.
Table 3
Reporting Standards for Studies Using Random and Nonrandom Assignment of Participants to Experimental
Groups
Paper section and topic Description
Module A1: Studies using random assignment
Method
Random assignment method Procedure used to generate the random assignment sequence, including details
of any restriction (e.g., blocking, stratification)
Random assignment concealment Whether sequence was concealed until interventions were assigned
Random assignment implementation Who generated the assignment sequence
Who enrolled participants
Who assigned participants to groups
Masking Whether participants, those administering the interventions, and those assessing
the outcomes were unaware of condition assignments
If masking took place, statement regarding how it was accomplished and how
the success of masking was evaluated
Statistical methods Statistical methods used to compare groups on primary outcome(s)
Statistical methods used for additional analyses, such as subgroup analyses and
adjusted analysis
Statistical methods used for mediation analyses
Module A2: Studies using nonrandom assignment
Method
Assignment method Unit of assignment (the unit being assigned to study conditions, e.g., individual,
group, community)
Method used to assign units to study conditions, including details of any
restriction (e.g., blocking, stratification, minimization)
Procedures employed to help minimize potential bias due to nonrandomization
(e.g., matching, propensity score matching)
Masking Whether participants, those administering the interventions, and those assessing
the outcomes were unaware of condition assignments
If masking took place, statement regarding how it was accomplished and how
the success of masking was evaluated
Statistical methods Statistical methods used to compare study groups on primary outcome(s),
including complex methods for correlated data
Statistical methods used for additional analyses, such as subgroup analyses and
adjusted analysis (e.g., methods for modeling pretest differences and
adjusting for them)
Statistical methods used for mediation analyses
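Module A1 asks authors to report the procedure used to generate the random assignment sequence, including any restriction such as blocking. By way of illustration only, the sketch below implements one common such procedure, permuted-block randomization; the block size, condition labels, and seed are hypothetical choices an author would report, not values the standards specify.

import random

def permuted_block_sequence(n_participants, block_size=4,
                            conditions=("experimental", "comparison"),
                            seed=None):
    # Random assignment sequence restricted by permuted blocks: each block
    # contains an equal number of each condition, keeping group sizes close.
    assert block_size % len(conditions) == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions) * (block_size // len(conditions))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# Reporting the seed makes the generated sequence auditable and reproducible.
assignments = permuted_block_sequence(20, block_size=4, seed=20081201)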
This group also proposed an additional five items, not included in the CONSORT list, that it felt should be included in reports on behavioral medicine interventions: (a) training of treatment providers, (b) supervision of treatment providers, (c) patient and provider treatment allegiance, (d) manner of testing and success of treatment delivery by the provider, and (e) treatment adherence. The JARS Group encourages
other authoritative groups of interested researchers, practi-
tioners, and journal editorial teams to use Table 1 as a
similar starting point in their efforts, adding and deleting
items and modules to fit the information needs dictated by
research designs that are prominent in specific subdisci-
plines and topic areas. These revisions could then be in-
corporated into future iterations of the JARS.
Information for Inclusion in Manuscripts That Report Meta-Analyses
The same pressures that have led to proposals for reporting
standards for manuscripts that report new data collections
have led to similar efforts to establish standards for the
reporting of other types of research. Particular attention has
been focused on the reporting of meta-analyses.
With regard to reporting standards for meta-analysis,
the JARS Group began by contacting the members of the Society for Research Synthesis Methodology.
Figure 1
Flow of Participants Through Each Stage of an Experiment or Quasi-Experiment
Enrollment: Assessed for eligibility (n = ). Excluded (total n = ) because: did not meet inclusion criteria (n = ); refused to participate (n = ); other reasons (n = ).
Assignment: Assigned to experimental group (n = ): received experimental manipulation (n = ); did not receive experimental manipulation (n = ; give reasons). Assigned to comparison group (n = ): received comparison manipulation, if any (n = ); did not receive comparison manipulation (n = ; give reasons).
Follow-Up (for each group): Lost to follow-up (n = ; give reasons); discontinued participation (n = ; give reasons).
Analysis (for each group): Analyzed (n = ); excluded from analysis (n = ; give reasons).
Note. This flowchart is an adaptation of the flowchart offered by the CONSORT Group (Altman et al., 2001; Moher, Schulz, & Altman, 2001). Journals publishing the original CONSORT flowchart have waived copyright protection.
The Group asked these members to share what they felt were the critical aspects of meta-analysis conceptualization, methodology, and results that need to be reported so that readers (and manuscript reviewers) can make informed, critical judgments about the appropriateness of the methods used for the inferences drawn. This query led to the identification of four other efforts to establish reporting standards for meta-analysis. These included the QUOROM Statement (Quality
of Reporting of Meta-analysis; Moher et al., 1999) and its
revision, PRISMA (Preferred Reporting Items for System-
atic Reviews and Meta-Analyses; Moher, Liberati, Tet-
zlaff, Altman, & the PRISMA Group, 2008), MOOSE
(Meta-analysis of Observational Studies in Epidemiology;
Stroup et al., 2000), and the Potsdam Consultation on
Meta-Analysis (Cook, Sackett, & Spitzer, 1995).
Next the JARS Group compared the content of each of
the four sets of standards with the others and developed a
combined list of nonredundant elements contained in any
or all of them. The JARS Group then examined the com-
bined list, rewrote some items for clarity and ease of
comprehension by an audience of psychologists, and added
a few suggestions of its own. Then the resulting recom-
mendations were shared with a subgroup of members of the
Society for Research Synthesis Methodology who had ex-
perience writing and reviewing research syntheses in the
discipline of psychology. After these suggestions were
incorporated into the list, it was shared with members of
the Publications and Communications Board, who were
requested to react to it. After receiving these reactions, the
JARS Group arrived at the list of recommendations con-
tained in Table 4, titled Meta-Analysis Reporting Standards
(MARS). These were then approved by the Publications
and Communications Board.
Other Issues Related to Reporting Standards
A Definition of “Reporting Standards”
The JARS Group recognized that there are three related
terms that need definition when one speaks about journal
article reporting standards: recommendations, standards,
and requirements. According to Merriam-Webster’s Online
Dictionary (n.d.), to recommend is “to present as worthy of
acceptance or trial . . . to endorse as fit, worthy, or compe-
tent.” In contrast, a standard is more specific and should
carry more influence: “something set up and established by
authority as a rule for the measure of quantity, weight,
extent, value, or quality.” And finally, a requirement goes
further still by dictating a course of action—“something
wanted or needed”—and to require is “to claim or ask for
by right and authority . . . to call for as suitable or appro-
priate . . . to demand as necessary or essential.”
With these definitions in mind, the JARS Group felt it
was providing recommendations regarding what informa-
tion should be reported in the write-up of a psychological
investigation and that these recommendations could also be
viewed as standards or at least as a beginning effort at
developing standards. The JARS Group felt this character-
ization was appropriate because the information it was
proposing for inclusion in reports was based on an integra-
tion of efforts by authoritative groups of researchers and
editors. However, the proposed standards are not offered as
requirements. The methods used in the subdisciplines of
psychology are so varied that the critical information
needed to assess the quality of research and to integrate it
successfully with other related studies varies considerably
from method to method in the context of the topic under
consideration. By not calling them “requirements,” the
JARS Group felt the standards would be given the weight
of authority while retaining for authors and editors the
flexibility to use the standards in the most efficacious
fashion (see below).
The Tension Between Complete Reporting and Space Limitations
There is an innate tension between transparency in report-
ing and the space limitations imposed by the print medium.
As descriptions of research expand, so does the space
needed to report them. However, recent improvements in
the capacity of and access to electronic storage of infor-
mation suggest that this trade-off could someday disappear.
For example, the journals of the APA, among others, now
make available to authors auxiliary websites that can be
used to store supplemental materials associated with the
articles that appear in print. Similarly, it is possible for
electronic journals to contain short reports of research with
hot links to websites containing supplementary files.
The JARS Group recommends an increased use and
standardization of supplemental websites by APA journals
and authors. Some of the information contained in the
reporting standards might not appear in the published arti-
cle itself but rather in a supplemental website. For example,
if the instructions in an investigation are lengthy but critical
to understanding what was done, they may be presented
verbatim in a supplemental website. Supplemental materials might include the flowchart of participants through the study; oversized tables of results (especially those associated with meta-analyses involving many studies); audio or video clips; computer programs; and even primary or supplementary data sets. Of course, all such
supplemental materials should be subject to peer review
and should be submitted with the initial manuscript. Editors
and reviewers can assist authors in determining what ma-
terial is supplemental and what needs to be presented in the
article proper.
Other Benefits of Reporting Standards
The general principle that guided the establishment of the
JARS for psychological research was the promotion of
sufficient and transparent descriptions of how a study was
conducted and what the researcher(s) found. Complete
reporting allows clearer determination of the strengths and
weaknesses of a study. This permits the users of the evi-
dence to judge more accurately the appropriate inferences
and applications derivable from research findings.
Related to quality assessments, it could be argued as well that the existence of reporting standards will have a salutary effect on the way research is conducted.
Table 4
Meta-Analysis Reporting Standards (MARS): Information Recommended for Inclusion in Manuscripts Reporting
Meta-Analyses
Paper section and topic Description
Title Make it clear that the report describes a research synthesis and include “meta-analysis,” if
applicable
Footnote funding source(s)
Abstract The problem or relation(s) under investigation
Study eligibility criteria
Type(s) of participants included in primary studies
Meta-analysis methods (indicating whether a fixed or random model was used)
Main results (including the more important effect sizes and any important moderators of these
effect sizes)
Conclusions (including limitations)
Implications for theory, policy, and/or practice
Introduction Clear statement of the question or relation(s) under investigation:
Historical background
Theoretical, policy, and/or practical issues related to the question or relation(s) of interest
Rationale for the selection and coding of potential moderators and mediators of results
Types of study designs used in the primary research, their strengths and weaknesses
Types of predictor and outcome measures used, their psychometric characteristics
Populations to which the question or relation is relevant
Hypotheses, if any
Method
Inclusion and exclusion
criteria
Operational characteristics of independent (predictor) and dependent (outcome) variable(s)
Eligible participant populations
Eligible research design features (e.g., random assignment only, minimal sample size)
Time period in which studies needed to be conducted
Geographical and/or cultural restrictions
Moderator and mediator
analyses
Definition of all coding categories used to test moderators or mediators of the relation(s) of
interest
Search strategies Reference and citation databases searched
Registries (including prospective registries) searched:
Keywords used to enter databases and registries
Search software used and version
Time period in which studies needed to be conducted, if applicable
Other efforts to retrieve all available studies:
Listservs queried
Contacts made with authors (and how authors were chosen)
Reference lists of reports examined
Method of addressing reports in languages other than English
Process for determining study eligibility:
Aspects of reports that were examined (i.e., title, abstract, and/or full text)
Number and qualifications of relevance judges
Indication of agreement
How disagreements were resolved
Treatment of unpublished studies
Coding procedures Number and qualifications of coders (e.g., level of expertise in the area, training)
Intercoder reliability or agreement
Whether each report was coded by more than one coder and if so, how disagreements were
resolved
Assessment of study quality:
If a quality scale was employed, a description of criteria and the procedures for application
If study design features were coded, what these were
How missing data were handled
Statistical methods Effect size metric(s):
Effect size calculation formulas (e.g., Ms and SDs, use of univariate F-to-r transform)
Corrections made to effect sizes (e.g., small sample bias, correction for unequal ns)
Effect size averaging and/or weighting method(s)
For example, by setting a standard that rates of loss of participants
should be reported (see Figure 1), researchers may begin
considering more concretely what acceptable levels of at-
trition are and may come to employ more effective proce-
dures meant to maximize the number of participants who
complete a study. Or standards that specify reporting a
confidence interval along with an effect size might moti-
vate researchers to plan their studies so as to ensure that the
confidence intervals surrounding point estimates will be
appropriately narrow.
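As a concrete reading of that last point, a researcher planning for precision could solve for the smallest per-group sample size whose expected confidence interval is acceptably narrow. The sketch below is one minimal way to do this, assuming two equal groups and the same large-sample variance formula for d noted earlier; the expected effect size and target half-width are hypothetical planning values.

import math
from scipy import stats

def n_for_ci_halfwidth(d_expected, target_halfwidth, alpha=0.05):
    # Smallest per-group n so the approximate CI for Cohen's d is no wider
    # than +/- target_halfwidth; var(d) = 2/n + d^2/(4n) for equal groups.
    z = stats.norm.ppf(1 - alpha / 2)
    n = 2
    while z * math.sqrt(2.0 / n + d_expected**2 / (4.0 * n)) > target_halfwidth:
        n += 1
    return n

# Hypothetical: expecting d = 0.5 and wanting a CI of about +/- 0.20
print(n_for_ci_halfwidth(0.5, 0.20))   # roughly 199 per group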
Also, as noted above, reporting standards can improve
secondary use of data by making studies more useful for
meta-analysis. More broadly, if standards are similar across
disciplines, a consistency in reporting could promote inter-
disciplinary dialogue by making it clearer to researchers
how their efforts relate to one another.
And finally, reporting standards can make it easier
for other researchers to design and conduct replications
and related studies by providing more complete descrip-
tions of what has been done before. Without complete
reporting of the critical aspects of design and results, the
value of the next generation of research may be com-
promised.
Possible Disadvantages of Standards
It is important to point out that reporting standards can also lead to excessive standardization, with negative implications.
Table 4 (continued)
Paper section and topic Description
Statistical methods
(continued)
How effect size confidence intervals (or standard errors) were calculated
How effect size credibility intervals were calculated, if used
How studies with more than one effect size were handled
Whether fixed and/or random effects models were used and the model choice justification
How heterogeneity in effect sizes was assessed or estimated
Ms and SDs for measurement artifacts, if construct-level relationships were the focus
Tests and any adjustments for data censoring (e.g., publication bias, selective reporting)
Tests for statistical outliers
Statistical power of the meta-analysis
Statistical programs or software packages used to conduct statistical analyses
Results Number of citations examined for relevance
List of citations included in the synthesis
Number of citations relevant on many but not all inclusion criteria that were excluded from the meta-analysis
Number of exclusions for each exclusion criterion (e.g., effect size could not be calculated),
with examples
Table giving descriptive information for each included study, including effect size and sample
size
Assessment of study quality, if any
Tables and/or graphic summaries:
Overall characteristics of the database (e.g., number of studies with different research
designs)
Overall effect size estimates, including measures of uncertainty (e.g., confidence and/or
credibility intervals)
Results of moderator and mediator analyses (analyses of subsets of studies):
Number of studies and total sample sizes for each moderator analysis
Assessment of interrelations among variables used for moderator and mediator analyses
Assessment of bias including possible data censoring
Discussion Statement of major findings
Consideration of alternative explanations for observed results:
Impact of data censoring
Generalizability of conclusions:
Relevant populations
Treatment variations
Dependent (outcome) variables
Research designs
General limitations (including assessment of the quality of studies included)
Implications and interpretation for theory, policy, or practice
Guidelines for future research
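To ground the statistical-methods entries of Table 4, here is a minimal sketch of the most common choices they describe: inverse-variance weighting, Cochran's Q test of heterogeneity, a DerSimonian-Laird estimate of the between-studies variance (tau-squared), and a random-effects pooled estimate with its confidence interval. The inputs are hypothetical study-level effect sizes and variances; a real synthesis would document how each was computed, as the table specifies.

import numpy as np
from scipy import stats

def random_effects_pool(effects, variances, alpha=0.05):
    # DerSimonian-Laird random-effects pooling of independent effect sizes.
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)              # Cochran's Q (heterogeneity)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-studies variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    z = stats.norm.ppf(1 - alpha / 2)
    return {"pooled": pooled, "ci": (pooled - z * se, pooled + z * se),
            "Q": q, "df": df, "p_Q": stats.chi2.sf(q, df), "tau2": tau2}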
For example, standardized reporting could fill articles with details of methods and results that are inconsequential to interpretation. The critical facts about a study can get lost in an excess of minutiae. Further, a forced consistency
can lead to ignoring important uniqueness. Reporting stan-
dards that appear comprehensive might lead researchers to
believe that “If it’s not asked for or does not conform to
criteria specified in the standards, it’s not necessary to
report.” In rare instances, then, the setting of reporting
standards might lead to the omission of information critical
to understanding what was done in a study and what was
found.
Also, as noted above, different methods are required
for studying different psychological phenomena. What
needs to be reported in order to evaluate the correspon-
dence between methods and inferences is highly dependent
on the research question and empirical approach. Infer-
ences about the effectiveness of psychotherapy, for exam-
ple, require attention to aspects of research design and
analysis that are different from those important for infer-
ences in the neuroscience of text processing. This context
dependency pertains not only to topic-specific consider-
ations but also to research designs. Thus, an experimental
study of the determinants of well-being analyzed via anal-
ysis of variance engenders different reporting needs than a
study on the same topic that employs a passive longitudinal
design and structural equation modeling. Indeed, the vari-
ations in substantive topics and research designs are facto-
rial in this regard. So experiments in psychotherapy and
neuroscience could share some reporting standards, even
though studies employing structural equation models in-
vestigating well-being would have little in common with
experiments in neuroscience.
Obstacles to Developing Standards
One obstacle to developing reporting standards encoun-
tered by the JARS Group was that differing taxonomies of
research approaches exist and different terms are used
within different subdisciplines to describe the same oper-
ational research variations. As simple examples, research-
ers in health psychology typically refer to studies that use
experimental manipulations of treatments conducted in nat-
uralistic settings as randomized clinical trials, whereas
similar designs are referred to as randomized field trials in
educational psychology. Some research areas refer to the
use of random assignment of participants, whereas others
use the term random allocation. Another example involves
the terms multilevel model, hierarchical linear model, and
mixed effects model, all of which are used to identify a
similar approach to data analysis. There have been, from
time to time, calls for standardized terminology to describe
commonly but inconsistently used scientific terms, such as
Kraemer et al.’s (1997) distinctions among words com-
monly used to denote risk. To address this problem, the
JARS Group attempted to use the simplest descriptions
possible and to avoid jargon and recommended that the
new Publication Manual include some explanatory text.
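To make the terminology point concrete, the single analysis sketched below would be labeled a “multilevel model,” a “hierarchical linear model,” or a “mixed-effects model” depending on the subdiscipline. The data, variable names, and use of the statsmodels library are illustrative assumptions, not part of the standards.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: students nested within schools
data = pd.DataFrame({
    "score":  [72, 75, 80, 78, 90, 88, 85, 91, 70, 74, 77, 73],
    "hours":  [1, 2, 3, 2, 4, 5, 4, 6, 1, 2, 3, 2],
    "school": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

# One analysis, three names: a random intercept for school (the grouping
# structure) and a fixed slope for study hours.
model = smf.mixedlm("score ~ hours", data, groups=data["school"])
print(model.fit().summary())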
A second obstacle was that certain research topics and
methods will reveal different levels of consensus regarding
what is and is not important to report. Generally, the newer
and more complex the technique, the less agreement there
will be about reporting standards. For example, although
there are many benefits to reporting effect sizes, there are
certain situations (e.g., multilevel designs) where no clear
consensus exists on how best to conceptualize and/or cal-
culate effect size measures. In a related vein, reporting a
confidence interval with an effect size is sound advice, but
calculating confidence intervals for effect sizes is often
difficult given the current state of software. For this reason,
the JARS Group avoided developing reporting standards
for research designs about which a professional consensus
had not yet emerged. As consensus emerges, the JARS can
be expanded by adding modules.
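For the simple two-group case, the computational difficulty mentioned above has a well-known resolution: an exact confidence interval for Cohen's d can be obtained by numerically inverting the noncentral t distribution. The sketch below illustrates that standard technique; the search brackets of plus or minus 10 noncentrality units are a pragmatic assumption.

from scipy import stats
from scipy.optimize import brentq

def cohens_d_ci(t_obs, n1, n2, alpha=0.05):
    # Exact CI for Cohen's d by inverting the noncentral t distribution:
    # find the noncentrality parameters that place t_obs at the alpha/2 tails.
    df = n1 + n2 - 2
    scale = (1.0 / n1 + 1.0 / n2) ** 0.5  # converts noncentrality to d units
    lo = brentq(lambda ncp: stats.nct.sf(t_obs, df, ncp) - alpha / 2,
                t_obs - 10, t_obs)
    hi = brentq(lambda ncp: stats.nct.cdf(t_obs, df, ncp) - alpha / 2,
                t_obs, t_obs + 10)
    return lo * scale, hi * scale

# Hypothetical example: t = 2.50 from two groups of 30
print(cohens_d_ci(2.50, 30, 30))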
Finally, the rapid pace of developments in methodol-
ogy dictates that any standards would have to be updated
frequently in order to retain currency. For example, the
state of the art for reporting various analytic techniques is
in a constant state of flux. Although some general princi-
ples (e.g., reporting the estimation procedure used in a
structural equation model) can incorporate new develop-
ments easily, other developments can involve fundamen-
tally new types of data for which standards must, by
necessity, evolve rapidly. Nascent and emerging areas,
such as functional neuroimaging and molecular genetics,
may require developers of standards to be on constant vigil
to ensure that new research areas are appropriately covered.
Questions for the Future
It has been mentioned several times that the setting of
standards for reporting of research in psychology involves
both general considerations and considerations specific to
separate subdisciplines. And, as the brief history of stan-
dards in the APA Publication Manual suggests, standards
evolve over time. The JARS Group expects refinements to
the contents of its tables. Further, in the spirit of evidence-
based decision making that is one impetus for the renewed
emphasis on reporting standards, we encourage the empir-
ical examination of the effects that standards have on
reporting practices. Not unlike the issues many psycholo-
gists study, the proposal and adoption of reporting stan-
dards is itself an intervention. It can be studied for its
effects on the contents of research reports and, most im-
portant, its impact on the uses of psychological research by
decision makers in various spheres of public and health
policy and by scholars seeking to understand the human
mind and behavior.
REFERENCES
Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., Gotzsche, P. C., & Lang, T. (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134(8), 663–694. Retrieved April 20, 2007, from http://www.consort-statement.org/
American Educational Research Association. (2006). Standards for re-
porting on empirical social science research in AERA publications.
Educational Researcher, 35(6), 33– 40.
American Psychological Association. (2001). Publication manual of the
American Psychological Association (5th ed.). Washington, DC: Au-
thor.
American Psychological Association, Council of Editors. (1952). Publication manual of the American Psychological Association. Psychological Bulletin, 49(Suppl., Pt. 2).
APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.
Cook, D. J., Sackett, D. L., & Spitzer, W. O. (1995). Methodologic
guidelines for systematic reviews of randomized control trials in health
care from the Potsdam Consultation on Meta-Analysis. Journal of
Clinical Epidemiology, 48, 167–171.
Cooper, H. (2009). Research synthesis and meta-analysis: A step-by-step approach (4th ed.). Thousand Oaks, CA: Sage.
Cooper, H., Hedges, L. V., & Valentine, J. C. (Eds.). (2009). The hand-
book of research synthesis and meta-analysis (2nd ed.). New York:
Russell Sage Foundation.
Davidson, K. W., Goldstein, M., Kaplan, R. M., Kaufmann, P. G., Knat-
terud, G. L., Orleans, T. C., et al. (2003). Evidence-based behavioral
medicine: What is it and how do we achieve it? Annals of Behavioral
Medicine, 26, 161–171.
Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004).
Improving the reporting quality of nonrandomized evaluations of
behavioral and public health interventions: The TREND statement.
American Journal of Public Health, 94, 361–366. Retrieved April 20,
2007, from http://www.trend-statement.org/asp/documents/statements/
AJPH_Mar2004_Trendstatement
International Committee of Medical Journal Editors. (2007). Uniform
requirements for manuscripts submitted to biomedical journals: Writ-
ing and editing for biomedical publication. Retrieved April 9, 2008,
from http://www.icmje.org/#clin_trials
Kraemer, H. C., Kazdin, A. E., Offord, D. R., Kessler, R. C., Jensen, P. S.,
& Kupfer, D. J. (1997). Coming to terms with the terms of risk.
Archives of General Psychiatry, 54, 337–343.
Merriam-Webster’s online dictionary. (n.d.). Retrieved April 20, 2007,
from http://www.m-w.com/dictionary/
Moher, D., Cook, D. J., Eastwood, S., Olkin, I., Rennie, D., & Stroup, D., for the QUOROM Group. (1999). Improving the quality of reporting of meta-analysis of randomized controlled trials: The QUOROM statement. Lancet, 354, 1896–1900.
Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Annals of Internal Medicine, 134(8), 657–662. Retrieved April 20, 2007, from http://www.consort-statement.org
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & the PRISMA
Group. (2008). Preferred reporting items for systematic reviews and
meta-analysis: The PRISMA statement. Manuscript submitted for pub-
lication.
No Child Left Behind Act of 2001, Pub. L. 107–110, 115 Stat. 1425
(2002, January 8).
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72.
Stone, A. A., & Shiffman, S. (2002). Capturing momentary, self-report data: A proposal for reporting guidelines. Annals of Behavioral Medicine, 24, 236–243.
Stroup, D. F., Berlin, J. A., Morton, S. C., Olkin, I., Williamson, G. D., Rennie, D., et al. (2000). Meta-analysis of observational studies in epidemiology. Journal of the American Medical Association, 283, 2008–2012.
Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
UNIVERSIDAD INTERAMERICANA DE PUERTO RICO
RECINTO METROPOLITANO
DEPARTAMENTO DE EDUCACIÓN Y PROFESIONES DE LA CONDUCTA
PROGRAMA DE PSICOLOGÍA
PROF. NOÉ GARCÍA
PSYCH 4971
Instructions for the first (1st) essay
I. General Instructions
Read the instructions carefully so that you can properly write the first assigned essay of the first unit. The due date for this essay is APRIL/16/2019. The text should be handed in printed and stapled, in Microsoft Word format, with a font size of twelve (12) points and line spacing of one and a half (1.5). The length of the work should be no less than three (3) pages and no greater than five (5) pages of content (no title page or cover of any kind is required).
At the end of your essay, list the References used (for more information, see the text I sent regarding the instructions for writing an essay for this class). Plagiarism will not be tolerated. The written essay should be a product that reflects your understanding and interpretation (substantiated with arguments, evidence, and reasoning, expressed in a clear, consistent, and accurate manner) of the assigned readings and class discussions. Do not use Internet sources; limit yourself to the assigned texts, the class discussion, and your own argumentative capabilities.
The essay has a total value of twenty-five (25) points.
II. Specific Instructions
The following is a brief recap (recapitulation) of our discussion during the first unit, which should set the premises and main arguments to discuss in your essay. In this first unit we have discussed the proposals put forward by Evidence-Based Practice (EBP) and its derivatives, alongside the APA’s take and proposals for its implementation in Psychology. The APA, especially Division 12 (the Clinical Psychology division), has argued for the need for a more systematic and transparent application of the “best evidence” when it comes to evaluating which are the “best treatments” (psychotherapies). In many ways this corresponds with a certain ideal regarding how the scientific community applies and informs the results from empirical research. Many psychologists feel that there are still some limits and even limitations to this proposal regarding EBP. We have seen that even those psychologists who defend the need for “better evidence” argue that: 1) there is bias regarding the publication of studies and the use of certain methodologies, selection processes, and analyses (valued as better suited to be regarded as “evidence”); 2) there is a gap between efficacy, effectiveness, and cost-effectiveness; 3) similarly, there is a gap between clinical practice and research; and 4) there are many misconceptions and much resistance regarding what EBP actually entails. Taking all of this into consideration, elaborate an essay in which you:
Discuss the limits (“strengths” and “weaknesses,” if you will) of EBP’s application to Psychology, taking into consideration some of the problems outlined, and the possibilities of rethinking how Psychology can account for its processes, outcomes, and validity. In other words, discuss how we can rethink the ways in which we view and analyze evidence in Psychology.
When you elaborate your essay, also take into consideration the following:
1) You need to use at least two (2) of the texts assigned and discussed in class (suggested texts do not count toward this). Be precise when presenting an author’s (or a group of authors’) critiques. In this case it is recommended to pick at least one (1) text from among the discussions of: the APA Presidential Task Force on Evidence-Based Practice (2006), Tolin et al. (2015), Lilienfeld et al. (2013), Ferguson & Heene (2012), or Wampold et al. (2016).
2) Once you have discussed the authors’ critiques and arguments, it is also relevant that you present your own critiques and thoughts on the matter, as cogent and elaborated as possible (beyond a simple appreciative opinion, always justify your answers).
III. References
Assigned
Garg, A. X., Hackam, D., & Tonelli, M. (2008). Systematic review and meta-analysis: When one study is just not enough. Clinical Journal of the American Society of Nephrology, 253–260.
Sánchez-Meca, J., & Marín-Martínez, F. (2010). Meta-analysis in psychological research. International Journal of Psychological Research, 3(1), 150–162.
APA Publications & Communications Board (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63(9), 839–851.
APA Presidential Task Force on Evidence-Based Practice (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285.
Tolin, D. F., McKay, D., Forman, E. M., Klonsky, E. D., & Thombs, B. D. (2015). Empirically supported treatment: Recommendations for a new model. Clinical Psychology: Science and Practice, 1–22.
Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2013). Why many clinical psychologists are resistant to evidence-based practice: Root causes and constructive remedies. Clinical Psychology Review, 33, 883–900.
Ferguson, C., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7(6), 555–561.
Shtulman, A. (2013). Epistemic similarities between students’ scientific and supernatural beliefs. Journal of Educational Psychology, 105(1), 199–212.
Wampold, B. E., et al. (2016). In pursuit of truth: A critical examination of meta-analyses of cognitive behavior therapy. Psychotherapy Research, 27(1), 14–32.
Leichsenring, F., & Rabung, S. (2011). Long-term psychodynamic psychotherapy in complex mental disorders: Update of a meta-analysis. British Journal of Psychiatry, 199, 15–22.