SOCW 6311 WK 3 Assignment: Creating a Single-System (Subject) Design Study

  


Be sure to include APA in-text citations and references (7th edition). I have included the rubric and my last essay with the instructor's feedback on how to improve it. It must also have an introduction and a conclusion.

The steps at the heart of single-system (subject) research are part of the everyday practice of social work. Each day social workers implement interventions to meet clients’ needs and monitor results. However, conducting proper single-system (subject) research entails far more than these simple day-to-day practices. Proper single-system research requires a high degree of knowledge and commitment. Social workers must fully understand the purpose of single-system (subject) research and the variations of single-system (subject) design. They must develop a hypothesis based upon research and select the right design for testing it. They must ensure the reliability and validity of the data to be collected and know how to properly analyze and evaluate that data. This assignment asks you to rise to the challenge of creating a proposal for a single-subject research study.

To prepare for this Assignment, imagine that you are the social worker assigned to work with Paula Cortez (see the case study, "Social Work Research: Single Subject," in this week's resources; the case study link appears at the bottom of this page). After an initial assessment of her social, medical, and psychiatric problems, you develop a plan for intervention. You also develop a plan to monitor progress in your work with her using measures that can be evaluated in a single-system research design. As a scholar practitioner, you rely on research to help plan your intervention and your evaluation plan.

Complete the Cortez Family interactive media in this week's resources. Conduct a literature search on the chronic issues associated with HIV/AIDS and bipolar disorder. Search for additional research related to assessing outcomes and theoretical frameworks appropriate for this client. For example, your search could include terms such as motivational interviewing and outcomes and goal-oriented practice and outcomes. You might also look at the NREPP database identified in Week 1 to search for interventions related to mental health and physical health.

Submit a 5- to 7-page proposal/research plan for single-system (subject) evaluation for your work with Paula Cortez. Identify the problems that you will target and the outcomes you will measure, select an appropriate intervention or interventions (including length of time), and identify an appropriate evaluation plan.

Include a description of:

The problem(s) that are the focus of treatment

The intervention approach, including length of time, so that it can be replicated

A summary of the literature that you reviewed that led you to select this intervention approach

The purpose for conducting a single-system (subject) research evaluation

The measures for evaluating the outcomes and observing change including:

Evidence from your literature search about the nature of the measures

The validity and reliability of the measures

How baseline measures will be obtained

How often follow-up measures will be administered

The criteria that you would use to determine whether the intervention is effective

How the periodic measurements could assist you in your ongoing work with Paula

Resources 

Laureate Education (Producer). (2013b). Cortez family [Interactive media]. Retrieved from http://mym.cdn.laureate-media.com/2dett4d/Walden/SOCW/6311/CH/mm/case_study/index.html

 


Accessing Information about Evidence-Based Practices

Summerlove Holcomb

SOCW 6311

Instructor: Kathleen Schoenecker

03/04/2021

Kathleen Schoenecker
The title page is coming along, but there are a few adjustments needed. Did you see the fully structured and formatted APA template posted to the Doc Share area of our course to help you with this? Check the left navigation bar in our course for Doc Sharing. Alternatively, you can use the generic, graduate-level 7th ed. APA templates posted to the Walden Writing Center.
For help finding the Walden 7th edition, graduate-level, APA templates, please check:
https://academicguides.waldenu.edu/writingcenter/templates/general



Accessing Information About Evidence-Based Practices

Research Questions on the 12-Step Program

How effective is the program?

Is it an evidence-based program?

How much does the program cost?

Summaries of the two interventions and their respective research regarding effectiveness

The first intervention uses cognitive behavioral therapy (CBT), which has been effective according to the research presented by Hofmann et al. (2012). In that study, CBT showed a high response rate and matched the problems experienced by the patients. CBT has also been widely applied to a range of problems, from substance abuse to mental health issues. The therapy is accessible to low-income persons, and it has been found to have minimal to no side effects throughout the patient's recovery process. The article also provides sufficient research to support the intervention.

The second intervention is the 12-step facilitation program applied by social workers in helping patients with substance use disorders. The program is very affordable and is also known as a low-cost community program. Active involvement in the 12-step program was associated with better outcomes in the research conducted by Kelly et al. (2013).

Recommendations for Tiffani's social worker that address the following:

Factors to consider when choosing between the two interventions

The factors to consider include the program's cost-effectiveness. A good program should not cost much, because treatment may be prolonged and thereby inconvenience the patient. Another factor is the program's quality: it should be evidence-based, with effectiveness demonstrated in well-conducted studies. Another important factor is

Kathleen Schoenecker
Is there data to support the efficacy of the approach for your client?

Kathleen Schoenecker
It would be helpful to include specific data to show the rate of success.


that the program should match the patient's condition, to avoid irrelevance and unnecessary costs when there would be no positive outcomes. Lastly, one should consider the organizational resources offered by the program.

The social work skills that the staff would require to implement the intervention

Social workers need decision-making and analytical skills to assess interventions for their clients. Likewise, the social worker should have good communication skills, which are needed to educate patients and persuade them toward the best program to pursue (Substance Abuse and Mental Health Services Administration, 2018). Substance abuse assessment and management skills are also essential. Social workers should also have research skills, to locate programs and analyze their effectiveness based on the evidence provided in the research.

The training required to implement each intervention

The training required to implement CBT is psychological training in studying patients' behaviors and reactions to stimuli (Small et al., 2007). The training needed for the 12-step program, on the other hand, includes therapy training and familiarity with clinical trials and tests.

An evaluation of evidence-based practice based on your reaction to the experience, in which you address the following questions:

Would you, as a beginning researcher, have enough knowledge to benefit from researching evidence-based practices? Why or why not?

Yes, I have much to gain from researching evidence-based practices, which will help me develop credible future research that can be backed up with substantial evidence. Another benefit is gathering enough information to present my future research to agencies for approval (National Association of Social Workers, n.d.). Likewise, I have learned that evidence-based research can improve the

Kathleen Schoenecker
Check the requirements further.

Kathleen Schoenecker
Well said.

Kathleen Schoenecker
Elaboration would be appropriate.


public's trust and confidence in given interventions. There are also the cost benefits of evidence-based programs.

How might the research that you conducted increase your confidence in the intervention with Tiffani?

The research I conducted has enabled me to handle Tiffani's situation effectively by using evidence-based interventions (The California Evidence-Based Clearinghouse for Child Welfare, 2018). The interventions should work because they have been tested, with positive results seen in scenarios that apply to her substance abuse condition.

Is the information provided enough to decide on interventions? Why or why not?

Yes, the information provided is enough for decision making on Tiffani's condition, because there is sufficient evidence to support the interventions. As the social worker stated, the twelve-step model is also an evidence-based model. Therefore, patients only need to be educated and well informed about the models before use.

Kathleen Schoenecker
Elaborate. What cost-benefits?


References

Hofmann, S. G., Asnaani, A., Vonk, I. J., Sawyer, A. T., & Fang, A. (2012). The efficacy of cognitive behavioral therapy: A review of meta-analyses. Cognitive Therapy and Research, 36(5), 427-440.

Kelly, J. F., Stout, R. L., & Slaymaker, V. (2013). Emerging adults' treatment outcomes in relation to 12-step mutual-help attendance and active involvement. Drug and Alcohol Dependence, 129(1-2), 151-157.

National Association of Social Workers. (n.d.). Evidence-based practice. https://www.socialworkers.org/News/Research-Data/Social-Work-Policy-Research/Evidence-Based-Practice

Small, S. A., Cooney, S. M., Eastman, G., & O'Connor, C. (2007). Guidelines for selecting an evidence-based program: Balancing community needs, program quality, and organizational resources.

Substance Abuse and Mental Health Services Administration. (2018). Evidence-based practice resource center. https://www.samhsa.gov/resource-search/ebp

The California Evidence-Based Clearinghouse for Child Welfare. (2018). Program registry. https://www.cebc4cw.org/home/

Kathleen Schoenecker
bold.

Locating Assessment Instruments

Kevin Corcoran and Nikki Hozack

This chapter addresses how to locate instruments for social work research and practice. This task may not seem too challenging, but it is. Locating instruments includes being familiar with a number of sources of measurement instruments and knowing what it is one wants to measure or observe in the first place.

To locate an instrument, the researcher must know what he or she intends to measure. This includes a well-defined construct or conceptual domain of study. The measurement tool is the operationalization of the variable, and it is impossible to locate an appropriate measurement unless the researcher is certain what is to be measured. Knowing what to observe includes precise definitions of the independent and dependent variables. Instruments often are associated with operationalizing the dependent variables (e.g., marital discord in a single-system design of a couple in counseling, clinical depression in a clinical study); some instruments chiefly ascertain observations made by some relevant other, such as a spouse or case manager. By design, instruments intend to systematically quantify some affect, cognition, or conduct in some environment or setting and provide numerical estimates of affect, cognition, or conduct.

Instruments also are useful in operationalizing independent variables. In experimental designs, this is considered a manipulation check. The reason for using a measurement of the independent variables, as the phrase suggests, is to determine whether the manipulation of the independent variable was successful. For example, assume that the researcher is conducting a study comparing in-home counseling services to case management services. The researcher would want to be reassured that the counseling group was actually getting "counseling" from the counselor and that the case management group was not getting some form of counseling from the case managers. Without the former, the researcher would not be certain that the counseling groups actually had sufficient exposure to truly be considered under the treatment condition of counseling. By measuring the independent variable, the researcher can also determine whether exposure to some form of therapeutic relationship with the case manager contaminated the comparison group. To conduct a manipulation check like this, the researcher might decide to administer the Working Alliance Inventory (Horvath & Greenberg, 1989), which ascertains three elements of therapeutic relationships: goal orientation, task


Part I • Quantitative Approaches: Foundations of Data Collection

directedness, and bonding. The researchers would expect that the research participants in the experimental condition would have stronger indicators of a therapeutic relationship and that those in the control group would not.

In summary, the challenge of locating measures includes determining what well-defined construct or concept of either the independent or dependent variable is to be observed. Once that is determined, the challenge is to marshal through a number of measures so as to find appropriate ones that are reliable and valid. This chapter provides a number of resources to locate instruments but does not promise to enable the reader to do a complete search for all existing instruments. That is becoming increasingly difficult with the development of more instruments and new outlets of availability (e.g., the Internet). The scope of the resources in this chapter, however, is sufficiently broad to locate an adequate number of instruments for research and practice, and in all likelihood, the social work researcher will find many appropriate instruments and not too few.

Sources for Locating Instruments

There are a number of sources of instruments. This chapter considers four major sources: professional journals, books, commercial publishing houses specializing in marketing measurement tools, and the Internet.

Professional Journals

Instruments are of little value unless they are psychometrically sound (i.e., reliable and valid). Because the development of a good instrument itself involves research to estimate reliability and validity, professional journals often are the first outlets for new instruments. Journals are also one of the first outlets for normative data on more established instruments. Because of the rapid change in the knowledge base of the behavioral and social sciences, journals probably offer the most current sources of instruments.

Many scholarly journals are excellent sources of instruments. Some focus chiefly on measurements (e.g., Journal of Personality Assessment, Psychological Assessment). Other journals might publish instruments that are relevant to the professional or scholarly discipline of the readership (e.g., Research on Social Work Practice, Family Process). Table 5.1 contains a number of scholarly and professional journals useful in locating new instruments and published normative data. Most journals can be found in a good university library. In addition, most are also available via the Internet. It should be noted that while some journals have their own Web sites, others can be accessed from multiple sites that will allow access to the articles. Many of the URLs in Table 5.1 are such links, and if an address no longer works, typing the journal name into an Internet search engine should suffice to bring up many other sources.

Books

In addition to the journals, numerous reference books describe instruments, and about a dozen actually reprint the instruments. Reference books for instruments review measurement tools and provide citations for further information on locating the actual measurement tools. Three widely noted examples are the Mental Measurements Yearbook (Conoley & Kramer, 1989, 1995), Tests in Print (Mitchell & Buros Institute, 1983), and

Table 5.1 Selected Journals Frequently Publishing New Measurement Tools

Journal | Journal Web Site

American Journal of Psychiatry | ajp.psychiatryonline.org
Applied Behavioral Measurement | No Web site available; only hard copies in the university library
Journal of Psychopathology and Behavioral Assessment | www.springerlink.com/content/105340
Behavior Therapy | www.aabt.org/mentalhealth/journals
Behaviour Research and Therapy | www.sciencedirect.com/science/journal/00057967
Educational and Psychological Measurement | epm.sagepub.com
Evaluation in Family Practice | No Web site available; only hard copies in the university library
Family Process | www.familyprocess.org
Hispanic Journal of Behavioral Sciences | hjb.sagepub.com
Journal of Behavioral Assessment | www.springerlink.com/content/0882-2689
Journal of Black Psychology | jbp.sagepub.com
Journal of Clinical Psychology | www.interscience.wiley.com/journal/31171/home
Journal of Consulting and Clinical Psychology | www.apa.org/journals/ccp
Journal of Nervous and Mental Disease | www.jonmd.com
Journal of Personality Assessment | www.personality.org/jpa.html
Measurement and Evaluation in Counseling and Development | www.counseling.org/Publications/Journals.aspx
Psychological Assessment | www.apa.org/journals/pas
Research on Social Work Practice | rsw.sagepub.com
Social Work Research | www.naswpress.org/publications/journals/research/swrintro.html

Tests (Keyser & Sweetland).

A number of books reference and actually reprint the instruments. Some are relevant to topics of social work practice (Schutte & Malouff, 1995), whereas others are more relevant to research (Robinson & Shaver, 1973). A couple of books are more specific to certain populations (e.g., families [McCubbin & Thompson, 1991]) or certain problems (e.g., stress [Cohen, Kessler, & Gordon, 1995] and anxiety [Antony, Orsillo, Roemer, & Association for Advancement of Behavior Therapy, 2001]). Altogether, there are more than 100 reference books for instruments; a good university library may be needed. Table 5.2 lists several books for instruments, all published since 1980.

Table 5.2 Selected Books

Books That Reprint and Reference Measurement Tools

Cautela (1977, 1981)
Fischer and Corcoran (1994)
Hudson (1982, 1992)
McCubbin and Thompson (1991)
McCubbin, Thompson, and McCubbin (1996)
McDowell and Newell (1996)
Robinson and Shaver (1973)
Schutte and Malouff (1995)

Books That Describe and Reference Measures

Aiken (1996)
Anastasi (1988)
Bellack and Hersen (1988a, 1988b)
Brodsky and Smitherman (1983)
Ciarlo, Brown, Edwards, Kiresuk, and Newman (1986)
Conoley and Kramer (1989, 1995)
Dana (1993)
Fredman and Sherman (1987)
Grotevant and Carlson (1989)
Harrington (1986)
Herman (1983)
Huber and Health Outcomes Institute (1994)
Kestenbaum and Williams (1988)
Keyser and Sweetland (1990)
Kumpfer, Shur, Ross, Bunnell, Librett, and Millward (1992)
McDowell and Newell (1987, 1996)
McReynolds (1981)
Mitchell (1985)
Mitchell and Buros Institute (1983)
Olin and Keatinge (1998)
Perlmutter, Straw, and Touliatos (1990)
Sawin, Harrigan, and Woog (1995)
Sweetland and Keyser (1991)
Thompson (1989)
Van Riezen and Segal (1988)
Wetzler (1989)

Books That Discuss Measures

Barlow (1981)
Butcher (2002)
Goldman, Stein, and Guerry (1983)
Hersen and Ollendick (1993)
Jacob and Tennenbaum (1988)
Kamphaus and Reynolds (1990)
Lann, Rutter, and Tuma (1988)
Lauffer (1982)
Mash and Terdal (1988)
Meller, Ponterotto, and Suzuki (1996)
Merluzzi, Glass, and Genest (1981)
Ogles, Lambert, and Masters (1996)
Pecora (1995)
Sederer and Dickey (1996)
Streiner and Norman (1989)
Woody (1980)

Chapter 5 • Locating Assessment Instruments

Commercial Publishing Houses

The researcher may locate instruments from commercial publishing houses that specialize in marketing measurement tools. This outlet for instruments has a number of advantages, including security from the liability of copyright infringements, access to established instruments, and relative normative data that might be available only from the stream of commerce. Examples of this last point include the Beck Depression Inventory, which is available through Psychological Corporation, and Hudson's (1992) popular clinical measurement package, which is available from WALMYR Publishing (see Table 5.3). Most of the instruments marketed by publishing houses are available at a reasonable fee. Other instruments are available at no cost, such as the widely used Physical and Mental Health Summary Scales, also known as the SF-36 and the SF-12 (Ware, Kosinski, & Keller, 1994a, 1994b). These instruments are available through the Medical Outcomes Trust (see Table 5.3).

Table 5.3 lists a variety of publishing houses providing instruments. It is far from a complete list, given that there are nearly 1,000 publishing houses marketing assessment tools, not to mention a large number of presses that publish only a few specialty instruments. One of the most thorough lists is found in Conoley and Kramer (1995). When available, we have included the URL to facilitate your search.

The Internet

Another valuable source for locating instruments is the Internet. This remarkable source is truly a fountain of information worldwide and provides access to actual measurements from commercial Web sites, not-for-profit sites, research centers, publicly traded companies, and even individual authors. The Internet also changes rapidly, and this rate of change often means that as Web sites come, so may they go. Unlike a library, the information retrieved might not continue to be available to others needing it in the future. It is also important to note that the Internet may allow access to measurements that require a professional license and the proper training to administer, and that may require permission to use. As stated by the Emory University Library (2006), "In order to obtain and administer published psychological tests (sometimes called commercial tests), one must be a licensed professional or in some cases, a graduate student."

Although there are literally thousands of Web sites practical for locating these instruments, the most useful are those that weave together a number of sites. These are not simply "hot links" that are designed to provide access to other relevant sites but Web sites designed as partnerships around various sources of information on instruments. One extremely useful example is the ERIC/AE Test Locator (http://ericae.net/testcol.htm). Test Locator is a joint project of the ERIC Clearinghouse on Assessment and Evaluation of the Catholic University of America, the Educational Testing Service, the Buros Institute of Mental Measurement of the University of Nebraska, George Washington University, and the test publisher Pro-Ed. Each of these sponsors provides access to and reviews of instruments. For example, the Educational Testing Service page reviews more than 25,000 measurement tools. The combined sponsorship of Buros and Pro-Ed provides citations of publications using educational and psychological instruments, as well as access to three valuable reference books cited earlier: Mental Measurements Yearbook, Tests in Print, and


Table 5.3 List of Selected Publishers Marketing Measurement Tools

• Academic Therapy Publications, 20 Commercial Boulevard, Novato, CA 94947; www.academictherapy.com
• Achenbach, Thomas M., Department of Psychiatry, University of Vermont, 1 S. Prospect Street, Burlington, VT 05401-3444
• American Guidance Service, 420 Woodland Road, P.O. Box 99, Circle Pines, MN 55014; www.agsnet.com
• Associates for Research in Behavior, Inc., The Science Center, 34th and Market, Philadelphia, PA 19104
• Biometrics Research, New York State Psychiatric Institute, 722 W. 168th Street, Room 341, New York, NY 10032; www.wpic.pitt.edu/research/biometrics/index.htm
• California Test Bureau, 20 Ryan Ranch Road, Monterey, CA 93940; www.ctb.com
• Center for Epidemiologic Studies, Department of Health and Human Services, 5600 Fishers Lane, Rockville, MD 20857
• Consulting Psychologists Press, Inc., 577 College Ave., P.O. Box 11636, Palo Alto, CA 94306; www.cpp.com
• Educational and Industrial Testing Services, P.O. Box 7731, San Diego, CA 92107; www.edits.net
• Institute for Personality and Ability Testing, Inc., P.O. Box 188, 1062 Coronado Drive, Champaign, IL 61820; www.ipat.com
• Medical Outcomes Trust, 20 Park Plaza, Suite 1014, Boston, MA 02116-4313; www.outcomes-trust.org
• Multi-Health Systems, Inc., 908 Niagara Falls Boulevard, North Tonawanda, NY 14120; www.mhs.com
• Pearson Assessments (formerly NCS Assessments), 5605 Green Circle Drive, P.O. Box 1416, Minneapolis, MN 55440; www.pearsonassessments.com
• Nursing Research Associates, 3752 Cummins Street, Eau Claire, WI 54701
• Person-O-Metrics, Inc.
• Pro-Ed, 8700 Shoal Creek Boulevard, Austin, TX 78757; www.proedinc.com
• Psychological Assessment Resources, Inc., P.O. Box 998, Odessa, FL 33556; www.parinc.com
• Psychological Corporation, 555 Academic Court, San Antonio, TX 78204; www.harcourtassessment.com
• Psychological Publications, Inc., 290 Conejo Ridge Road, Suite 100, Thousand Oaks, CA 91361; www.tjta.com
• Psychological Services, Inc., 3400 Wilshire Boulevard, Suite 1200, Los Angeles, CA 90010; www.psionline.com
• Research Press, P.O. Box 9177, Champaign, IL 61820; www.researchpress.com
• SRA/McGraw-Hill (formerly Science Research Associates, Inc.), 155 North Wacker Drive, Chicago, IL 60606; www.sraonline.com
• Scott, Foresman & Company, Test Division, 1900 East Lake Avenue, Glenview, IL 60025
• Sigma Assessment Systems, Inc., P.O. Box 610984, Port Huron, MI 48061-0984; www.sigmaassessmentsystems.com
• SRA Product Group, London House, 9701 West Higgins Road, Rosemont, IL 60018
• U.S. Department of Defense, Testing Directorate, Headquarters, Military Enlistment Processing Command, Attention: MEPCT, Fort Sheridan, IL 60037
• U.S. Department of Labor, Division of Testing, Employment and Training Administration, Washington, DC 20213
• WALMYR Publishing Company, P.O. Box 12217, Tallahassee, FL 32317-2217; www.walmyr.com
• Western Psychological Services, 12031 Wilshire Boulevard, Los Angeles, CA 90025; www.wpspublish.com
• Wonderlic Personnel Test, Inc., 1509 N. Milwaukee Avenue, Libertyville, IL 60048-1380


Tests. The site includes the names and addresses of nearly 1,000 commercial publishers of instruments. The scope of this Web site is broad and includes information on qualitative research and measures, professional standards, and much more. It is an excellent initial step in locating instruments on the Internet.

Search engines are a useful and easy way to search for measurements or information on measurements. A few different types of searches will help the researcher find the instruments needed. By simply typing the name of a known measurement, many sites will be identified. For example, typing "Beck Depression Inventory" (Beck, 1972) in Google brings up many Web sites referencing the measurement. By exploring the links, much information can be found regarding uses of the instrument, normative data, and where the researcher might be able to purchase or access the measurement. If the type of measurement is undecided, a search by general topics, such as "psychological measurements," will bring up many sites that provide lists of measurements, such as www.psychology.org/links/Environment_Behavior_Relationships/Measurement/. A search that is more specific, such as "depression measurements," may narrow the spectrum of articles even further, although the resulting sites may take time to filter through. While some of the links may be usable, many articles will only cite the instruments instead of providing the information the researcher is seeking.

It may be more helpful to look up the measurement source on the Internet, such as the author or publisher of the instruments, as well as the book or journal in which it is located. Although all source searches can assist the researcher in locating measurements, the most efficient route takes us back to where our pursuit began, that is, to professional and scholarly journals. With the advent of electronic information, many online journals and articles are available to you as students at your university or college that are not normally offered to the public. This may include reviews of useful Web sites and publication of important Web site locations for special topics. One excellent example of this type of Web site citation is found in Psychiatric Services (http://psychservices.psychiatryonline.org/), which routinely publishes the Internet locations of a wide range of mental health information, including instruments.

Conclusion

This chapter has attempted to show the reader how to locate instruments. What might have seemed like a simple task, it was shown, may actually be quite difficult. There are a number of sources of instruments to help with this challenge. This chapter considered four major ones: professional journals, books, commercial publishing houses, and the Internet. Each offers access to a wide range of measurement tools for multitudes of variables of study in social work research and practice.

The resources presented to help locate instruments are far from complete; nonetheless, the outcome of a search using the provided resources is likely to produce more choices than expected, rather than too few for a relevant instrument. This is due to the rapid creation of new instruments, their use in an expanding number of social work research and practice settings, and the need for accountability by professionals. In the future, it is likely that even more and better measurement tools will become available. Because old instruments do not fade away (e.g., Beck Depression Inventory [Beck, 1972]) and new ones emerge, the search for instruments will become increasingly challenging. It is hoped that the resources presented in this chapter will help the reader navigate through this information and locate instruments for social work research and practice.

PART I • QUANTITATIVE APPROACHES: FOUNDATIONS OF DATA COLLECTION

CHAPTER 5 • LOCATING ASSESSMENT INSTRUMENTS


ERIC/AE Test Locator - http://ericae.net/testcol.htm
This Web site references more than 10,000 instruments. While this is a useful Web site for exploring and identifying appropriate instruments, a simple Google search of the instrument's name and/or author typically results in a large variety of publications on the instrument. The Google search also tends to produce a list of places where you may acquire the instrument, or it may provide an actual copy of it.

1. When is it advisable to use an existing published instrument to assess some construct, as opposed to inventing your own?

2. Go to the journal Reseat

3. Pick an area of social work practice of interest to you (e.g., domestic violence, child welfare, mental health, substance abuse, poverty). Using the resources described in this chapter, try to find one measure that you could potentially use. See how easy (or difficult) it is to locate an actual copy of the instrument and information on how to score it.

Single-System Studies

Mark A. Mattaini

Social work practice at all system levels involves action leading to behavioral or cultural change. The primary role of social work research is to provide knowledge that contributes to such professional action. While descriptive research about human and cultural conditions, as discussed elsewhere in this volume, can be valuable for guiding professional action, knowing how to most effectively support change is critical for practice. A central question for social work research, therefore, is "what works" in practice, what works to address what goals and issues, with what populations, under what contextual conditions. While descriptive research can suggest hypotheses, the only way to really determine how well any form of practice works is to test it, under the most rigorous conditions possible.

Experimental research is therefore critical for advancing social work practice. Unfortunately, only a small proportion of social work research is experimental (Thyer, 2001). Experimental research is of two types, group experiments (e.g., randomized clinical trials [RCTs]) and single-system research (SSR, also commonly referred to as single-case research, N of 1 research, or interrupted time-series experiments). Single-system experimental research, however, has often been underemphasized in social work, in part because of limited understanding of the logic of natural science among social scientists and social workers.

SSR is experimental research; its purpose, as noted by Horner and colleagues (2005), is "to document causal, or functional, relationships between independent and dependent variables" (p. 166). The methodology has been used with all system levels (micro, mezzo, and macro), making it widely applicable for studying social work concerns. For example, Moore, Delaney, and Dixon (2007) studied ways to enhance quality of life for quite impaired patients with Alzheimer's disease using single-system methods and were able to both individualize interventions and produce generalizable knowledge from their study in ways that perhaps no other research strategy could equal. In another example, Serna, Schumaker, Sherman, and Sheldon (1991) worked to improve family interactions in families with preteen and teenage children. The first several interventions they attempted (interventions that are common in social work practice) failed to produce changes that generalized to homes. Single-system procedures, however, allowed them to rigorously and sequentially test multiple approaches until an adequately powerful intervention strategy was refined. (Note that this would be impossible using group methods without undermining the rigor of the study.)


PART II • QUANTITATIVE APPROACHES: TYPES OF STUDIES

Turning to larger systems, single-system designs can be used, for example, to examine the relative effects of different sets of organizational and community contexts on the effectiveness of school violence prevention efforts (Mattaini, 2006). Furthermore, Jason, Braciszewski, Olson, and Ferrari (2005) used multiple baseline single-system methods to test the impact of policy changes on the rate of opening mutual help recovery homes for substance abusers across entire states. Embry and colleagues (2007) used a similar design to test the impact of a statewide intervention to reduce sales of tobacco to minors.

Although single-system methods are widely used for practice monitoring in social work, research and monitoring are different endeavors with different purposes. This chapter focuses on the utility of SSR for knowledge building. Readers interested in the use of single-system methods for practice monitoring are likely to find Bloom, Fischer, and Orme (2006) and Nugent, Sieppert, and Hudson (2001) particularly helpful.

Understanding Single-System Research

Single-system experimental research relies on natural science methodologies, while much of the rest of social work research, including a good deal of group experimental research, emphasizes social science methods. The differences are real and substantive. In 1993, Johnston and Pennypacker noted,

The natural sciences have spawned technologies that have dramatically transformed the human culture, and the pace of technological development only seems to increase. The social sciences have yet to offer a single well-developed technology that has had a broad impact on daily life. (p. 6)

There is little evidence that this situation has changed. The reasons involve both methods and philosophies of science. Critically, however, analysis is central in most natural sciences and is best achieved through the direct manipulation of variables and observation of the impact of those manipulations over a period of time. As one expert noted, the heart of SSR is demonstrating influence by "mak[ing] things go up and down" under precisely specified conditions (J. Moore, personal communication, 1998). Such analysis is often best done one case at a time.

SSR has particular strengths for social work research. SSR focuses on the individual system, the individual person, the individual family, and the individual neighborhood, typically the level of analysis of primary interest in social work. Furthermore, SSR allows detailed analysis of intervention outcomes for both responders and nonresponders, which is critical for practice because each client, not just the average client, must be of concern. Relevant variables can then be further manipulated to understand and assist those who have not responded to the initial manipulations (Horner et al., 2005). Furthermore, as noted by Horner and colleagues (2005), rigorous SSR can be implemented in natural and near-natural conditions, making it a practical strategy for elaborating and refining interventions with immediate applicability in standard service settings.

Contrasts With Group Experimental Research
Most group experimental research relies on comparing the impact of one or more interventions (e.g., experimental treatment vs. standard care, placebo therapy, or no treatment) applied to more or less equivalent samples. Ideally, these samples are randomly selected from a larger population of interest, but in social work research, it is more common for samples to be chosen on the basis of availability or convenience. Comparison studies include (a) classical experiments with randomization and no-intervention controls, (b) contrast studies that compare one intervention with another, and (c) a wide range of quasi-experimental designs. While comparison studies, especially randomized clinical trials, are often regarded as the gold standard for experimental research, the often unacknowledged strategic and tactical limits of such comparison studies are serious (Johnston & Pennypacker, 1993, p. 119). Conclusions rely on probabilistic methods drawn from the social sciences, rather than on the analytic methods of SSR. As a result, Johnston and Pennypacker (1993) suggest that comparison studies "often lead to inappropriate inferences with poor generality, based on improper evidence gathered in support of the wrong question, thus wasting the field's limited experimental resources" (p. 120). (Similar criticisms have been made of much descriptive research.)

While comparison studies are useful for many purposes (as outlined elsewhere in this volume), it is important to understand their limits. As is true of most social science research, comparison studies at their core are actuarial. They attempt to determine which of two procedures produces better results on average (Johnston & Pennypacker, 1993). In pretty much all cases, however, some persons (or groups, organizations, or communities) will do better, some will show minimal change, and others will do worse. Comparison studies by their nature do not provide information about the variables that may explain why these within-group differences occur; rather, such differences, while acknowledged, are generally treated as error. Analytic natural science methods, however, including rigorous SSR, can do so.

In addition,

although two procedures may address the same general behavioral goal, a number of detailed differences among them may often make each an inappropriate metric for the other. These differences may include (a) the exact characteristics of the populations and settings where each works best, (b) the target behaviors and their controlling influences, or (c) a variety of more administrative considerations such as the characteristics of the personnel conducting each procedure. (Johnston & Pennypacker, 1993, p. 122)

Similar issues are present for large system work like that done in community practice and prevention science. Biglan, Ary, and Wagenaar (2000) note a number of limitations to the use of comparison studies in community research, including "(a) the high cost of research due to the number of communities needed in such studies, (b) the difficulty in developing generalizable theoretical principles about community change processes through randomized trials, (c) the obscuring of relationships that are unique to a subset of communities, and (d) the problem of diffusion of intervention activities from intervention to control communities" (p. 32). SSR, particularly the use of sophisticated time-series designs with matched communities (Biglan et al., 2000; Coulton, 2005), provides powerful alternatives that do not suffer from these limitations.

Analytic investigations, in contrast to actuarial studies, allow the researcher to manipulate identified variables one at a time, often with one system at a time, to explore the impact of those variables and the differences in such impacts across systems, as well as to test hypotheses about the differences found. This is the natural science approach to investigation, this is how generalizable theory is built, and this is primarily how scientific advance occurs. Kerlinger (1986) states, "The basic aim of science is theory. Perhaps less cryptically, the basic aim of science is to explain natural phenomena" (p. 8). Social work needs to be able to understand how personal and contextual factors important to client welfare and human rights can be influenced, and analytic studies are needed to move the field in that direction and thus "transform . . . human culture" (Johnston & Pennypacker, 1993, p. 6). Once the relevant variables and contingent relationships have been clarified through analytic studies, group experimental comparisons may have unique contributions to make in organizational cost-benefit comparisons and other areas as outlined elsewhere in this volume.

The Logic of Single-System Research
The basic logic underlying SSR is straightforward. Data on the behavior of interest are collected over a period of time until the baseline rate is clearly established. Intervention is then introduced as data continue to be collected. In more rigorous single-system studies, intervention is independently introduced at several points in time, while holding contextual conditions constant, to confirm the presence of functional (causal) relationships. (Repeated measurement of the dependent variable[s] over time, therefore, is central to SSR.) As discussed later, a great deal is now known about how to achieve high levels of experimental control and validity in the use of these procedures.

Behaviors of interest in SSR may include those of individuals (clients, family members, service providers, policy makers) as well as aggregate behaviors among a group (students in a class, residents in a state). In addition, behavior as used here includes all forms of actions in context (Lee, 1988), including motor behaviors (e.g., going to bed), visceral behaviors (e.g., bodily changes associated with emotions), verbal behaviors (e.g., speaking or covert self-talk), and observational behaviors (e.g., hearing or dreaming).

A number of dimensions of behavior can be explored and potentially changed in SSR, including rate (frequency by unit of time), intensity, duration, and variability. Single-system researchers therefore can measure the impact of intervention (or prevention) on (a) how often something occurs (e.g., rate of suicide in a state), (b) how strongly it is present (e.g., level of stress), (c) how long something occurs (e.g., length of tantrums), and (d) how stable a phenomenon is (e.g., whether spikes in violence can be eliminated in a neighborhood). Nearly everything that social work research might be interested in, therefore, can be studied using SSR techniques, from a client's emotional state to rates of violations of human rights within a population.
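All four dimensions can be computed from the same observation record once events are logged. The sketch below is illustrative only: the event log, session length, and 1 to 10 intensity ratings are invented for the example, not taken from the chapter.

```python
from statistics import mean, stdev

# Hypothetical observation log: each entry is (duration_minutes, intensity_rating)
# for one tantrum observed during a 5-hour session. Data are invented.
tantrums = [(4, 7), (6, 8), (3, 5), (5, 9)]
session_hours = 5

rate = len(tantrums) / session_hours            # how often: events per hour
avg_intensity = mean(i for _, i in tantrums)    # how strongly present
avg_duration = mean(d for d, _ in tantrums)     # how long episodes last
variability = stdev(d for d, _ in tantrums)     # how stable duration is

print(rate, avg_intensity, avg_duration, variability)
```

Here the same log yields a rate of 0.8 tantrums per hour, a mean intensity of 7.25, a mean duration of 4.5 minutes, and a standard deviation of duration of about 1.29 minutes.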

Nearly all SSR designs depend on first establishing a stable baseline, the rate (or intensity, duration, variability) of behavior before intervention. Since all behavior varies to some extent over time, multiple observations are generally necessary to establish the extent of natural variability. In some cases, a baseline of as few as three data points may be adequate; in general, however, the more data points collected to establish baseline rates, the greater the rigor of the study.
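One way to make "stable baseline" concrete is a simple decision rule. The thresholds below (a minimum of three points, a band of 20% around the baseline mean) are illustrative choices for a sketch, not standards stated in the chapter.

```python
from statistics import mean

def baseline_is_stable(points, min_points=3, band=0.20):
    """Heuristic check: enough observations, and every point falls
    within +/- band (here 20%) of the baseline mean. Thresholds are
    illustrative, not rules from the chapter."""
    if len(points) < min_points:
        return False
    m = mean(points)
    return all(abs(p - m) <= band * m for p in points)

print(baseline_is_stable([18, 20, 22]))  # varies little around the mean
print(baseline_is_stable([5, 20, 35]))   # too variable to anchor a study
```

In practice, researchers often judge stability visually from the graphed baseline; a rule like this simply makes one such judgment explicit and repeatable.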

Once a stable baseline has been obtained, it is possible to introduce a systematic variation in conditions (i.e., an intervention, or one in a planned series of interventions) and to determine whether that intervention is followed by a change in the behavior(s) of interest. The general standard for change in SSR is a shift in level, trend, or variability that is large, clearly apparent, relatively immediate, and clinically substantive. (Technical details regarding how such changes can be assessed graphically and statistically are provided later in this chapter.) Figure 14.1 presents the most basic structure of the approach, depicting a clear change between phases. (Much more rigorous designs are discussed later in this chapter.)
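As a rough illustration of assessing a shift in level between phases, the sketch below compares phase means and counts how many intervention points fall outside the baseline range, a simple nonoverlap index. The data are invented for the example.

```python
from statistics import mean

baseline = [20, 22, 19, 21, 20]       # hypothetical frequencies per session
intervention = [12, 10, 9, 11, 8]

# Shift in level: difference between phase means.
level_shift = mean(baseline) - mean(intervention)

# Proportion of intervention points outside the baseline range:
# a simple descriptive index of nonoverlap between phases.
lo, hi = min(baseline), max(baseline)
nonoverlap = sum(1 for x in intervention if x < lo or x > hi) / len(intervention)

print(level_shift)   # drop in average frequency after intervention
print(nonoverlap)    # 1.0 means no intervention point overlaps baseline
```

A large level shift combined with complete nonoverlap, as here, is the kind of change that is "large, clearly apparent" on a graph; borderline cases call for the graphical and statistical methods discussed later in the chapter.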

Figure 14.1. A graph of data for a simple single-system research design, with successive observations plotted on the horizontal axis and frequencies of a behavior of interest on the vertical axis. (This graph depicts an A-B [baseline-intervention] design, which will be discussed in detail later in the chapter.)

[Figure 14.1: line graph with phases labeled Baseline and Intervention; observations 1-10 on the horizontal axis and frequencies (0-25) on the vertical axis.]

Rigorous SSR requires strong measurement, more complex designs comparing multiple phases, and sophisticated analytic techniques. Horner and colleagues (2005, Table 1) identify a series of quality indicators that can be used to judge the rigor of single-system investigations, including evaluation of descriptions and characteristics of participants, descriptions and characteristics of the setting, specification of independent and dependent variables, measurement procedures, establishment of experimental control, and procedures to ensure internal, external, and social validity. All of these dimensions will be explored later in this chapter.

Two examples of methodologically straightforward single-system studies illustrate the core logic of SSR. Allday and Pakurar (2007) tested the effects of teacher greetings on rates of on-task behavior for three middle school students who had been nominated by their teachers for consistent difficulty in remaining on task during the beginning of the school day. Some existing research suggests that teacher greetings may have an impact on student behavior. In a multiple baseline design across the three students, greetings were introduced for one student at a time while the others continued to be observed under baseline conditions.

The rate of on-task behavior for the first student immediately improved, while there was no change for the other two. Shortly thereafter, the first student continued to be greeted, the second student also began to be greeted, and the third student continued to just be observed. On-task behavior for the first student remained high and improved substantially for the second, while there was no change for the third. At the next observation point, greetings for the third student were added; at this point, the data for all three showed improvement over baseline. Each time the intervention was introduced, and only when the intervention was introduced, the dependent variable showed a change. Each time change occurred concurrent with intervention, the presence of a causal relation became more convincing, the principle of unlikely successive coincidences (Thyer & Myers, 2007). In addition, two of the students showed greater improvements than the third.


Those data indicate that the intervention tested was adequate for the first two students but that refinements may be needed for the third. This level of precision is critical for clinical research.

In a second example, Davis and colleagues (2008) reported a single-system study with a 10-year-old boy who displayed multiple problem behaviors in the classroom that interfered with his own and others' learning. After tracking his behaviors over a baseline period of 5 days, a social skills and self-control intervention was initiated. As soon as these procedures were implemented, the level of behavior problems dropped dramatically. When the procedures were withdrawn for 5 days, behavior problems rapidly increased again. When the procedures were reintroduced, behavior problems dropped once more. The association between use of the intervention procedure and behavior problems becomes more persuasive each time they change in tandem. Much more sophisticated and rigorous studies are discussed below, some involving entire states in their sampling plans. What is important to note here, however, is the logic involved in demonstrating influence and control by introducing and withdrawing independent variables (interventions) in planned ways to test for functional relationships with dependent variables.
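The withdrawal logic in the Davis et al. example can be sketched as a comparison of phase means across a baseline, intervention, withdrawal, reintroduction sequence. The counts below are invented for illustration; only the pattern matters.

```python
from statistics import mean

# Hypothetical daily problem-behavior counts across an A-B-A-B
# (baseline-intervention-withdrawal-reintroduction) sequence.
phases = {
    "A1 baseline":     [14, 15, 13, 16, 14],
    "B1 intervention": [6, 5, 4, 5, 4],
    "A2 withdrawal":   [12, 13, 14, 13, 15],
    "B2 reintroduced": [5, 4, 3, 4, 3],
}

means = {name: mean(data) for name, data in phases.items()}
for name, m in means.items():
    print(name, m)

# The causal case strengthens when every A phase sits well above every
# B phase, i.e., behavior rises whenever the intervention is removed.
a_means = [m for n, m in means.items() if n.startswith("A")]
b_means = [m for n, m in means.items() if n.startswith("B")]
print(min(a_means) > max(b_means))  # consistent with a functional relationship
```

Each additional phase change that tracks the independent variable makes a coincidental explanation less plausible, which is exactly the "unlikely successive coincidences" reasoning described above.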

Rigor in SSR depends largely on two factors, the quality of the measurement used and the extent to which the design allows the investigator to rule out alternative explanations. In the Allday and Pakurar (2007) study, direct observation of the dependent variable was implemented, with two observers used during 20% of the observations. In the Davis et al. (2008) study, multiple measures, including direct onsite observation, were used (in 15% of observations, a second rater was used). In the Allday and Pakurar study, rigor was increased by introducing interventions one case at a time to determine whether intervention was functionally related to behavior change. By contrast, strengthening rigor in the Davis et al. study involved introducing and withdrawing procedures multiple times to determine whether presence or absence of the independent variable was consistently associated with behavior change.

Measurement in Single-System Research

There is a wide range of possible approaches for measuring independent and dependent variables in social work research. The most widely useful methods include direct observation; self-monitoring by the client or research participant; the use of scales, ratings, and standardized instruments completed by the client or other raters; and the use of goal attainment scaling (GAS) or behaviorally anchored rating scales (BARS).

Observation
Observation is the most direct and therefore often the most precise method of measuring behavior and behavior change. This is especially true when at least a sample of observations is conducted by more than one observer, which allows the calculation of interobserver reliability. Observation can be used to track such variables as the number of instances of self-injury, the percentage of 10-second intervals in which a student is on task, repeated patterns that occur in family communication, or the immediate responses of decision makers to assertive behavior by clients participating in advocacy efforts, for example.
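For interval-coded data like the on-task example above, one common interobserver reliability index is simple percent agreement across intervals. The sketch below uses invented codes from two hypothetical observers.

```python
# Two observers independently code each 10-second interval as
# on task (1) or off task (0). Percent agreement is a simple
# interobserver reliability index; the data are invented.
obs_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
obs_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

agreements = sum(a == b for a, b in zip(obs_a, obs_b))
percent_agreement = 100 * agreements / len(obs_a)

print(percent_agreement)  # observers agreed on 8 of 10 intervals
```

Percent agreement is easy to compute and report, though it does not correct for chance agreement; chance-corrected indices such as Cohen's kappa are often reported alongside it.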

Observation often involves less subjective judgments, inferences, or estimates than other measures. For example, use of a rating scale related to the incidence of child behavior problems may involve judgments as to whether the rate is "high" or "very high," while a simple count provides both more precision and perhaps a less value-laden measure. There are times when direct observation is impractical, but given its advantages, whenever possible, it is the strategy of choice in SSR. The wide availability of video recording equipment has contributed to both the practicality of observation and the possibility of recording in the moment and analyzing later, and it can also facilitate measuring interobserver, or interrater, reliability. (Careful refinement and pretesting of operational definitions and training procedures should be built into observation planning, as the quality of obtained data may otherwise be compromised.)

There are times when observation is not practical due to cost or intrusiveness, or when reactivity to observation is likely to influence the behaviors of interest. There also are times when observation and recording may raise ethical issues (as in some studies of illegal or antisocial behavior). Some issues of social work concern are also not directly observable; emotional states and covert self-talk are examples. Other measurement approaches are needed under such circumstances.

Self-Monitoring
Self-monitoring (self-observation) is a common and very useful approach for data collection in social work SSR. It is often not possible for the researcher to "go home with the client" to observe, for example, child behavior problems (although sometimes this is in fact realistic and useful). From hundreds of studies, however, it is clear that parents can record many kinds of data quite accurately, from the frequency of tantrums or successful toileting to the extent to which they are frustrated with the child. Couples can monitor the number of caring actions their partners take over the course of a week (e.g., in Stuart's [1980] "caring days" procedures). Depressed individuals can track their activities and levels of satisfaction on an hourly basis to prepare for behavioral activation procedures (Dimidjian et al., 2006; Mattaini, 1997). So long as the measurement procedures are clear and the participant has the capacity and motivation to complete them, self-monitoring can be both highly accurate and quite cost-effective. Simple charts that are clear and communicative for those completing them are usually essential and should be provided. Asking people to devise their own charting system often will not produce quality data, but collaborating with clients or participants to customize recording charts can work very well (studies involving multiple clients or participants require uniformity of recording).

Self-monitoring can itself be motivating to clients and research participants, providing immediate feedback and often a sense of control over one's life (Kopp, 1993). As a result, self-monitoring procedures are often reactive; monitoring by itself may change behavior, usually in the desirable direction. (A similar issue can arise with other forms of monitoring, but this is a particular issue with self-monitoring.) This can be an advantage for intervention, when the primary interest is in working toward the client's goals, but can complicate analysis in research since recording constitutes an additional active variable that needs to be taken into account in analysis. Often the best option when reactivity may be a problem is to begin self-monitoring without the planned intervention and examine the resulting data over several measurement points. If the dependent variable shows improvement, monitoring alone should be continued until a stable level is achieved before introducing further experimental manipulation.
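The decision rule just described (collect self-monitoring data first, and introduce the intervention only once the series is stable) can be sketched in code. This is a hypothetical illustration only; the slope cutoff is an assumption chosen for demonstration, not a standard from the SSR literature, since stability is normally judged visually relative to the variable's scale:

```python
def slope(data):
    """Ordinary least-squares slope of scores over measurement points."""
    n = len(data)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(data) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, data))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def ready_for_intervention(baseline, max_abs_slope=0.5):
    """Treat the self-monitored baseline as stable when its trend is
    close to flat. The 0.5 cutoff is an arbitrary illustration."""
    return len(baseline) >= 3 and abs(slope(baseline)) <= max_abs_slope

# A self-monitored tantrum count still improving under monitoring alone:
print(ready_for_intervention([9, 7, 5, 3]))   # False: strong downward trend
# A flat, stable baseline, ready for the experimental manipulation:
print(ready_for_intervention([6, 5, 6, 5, 6]))  # True
```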

Rating Scales and Rapid Assessment Instruments
When observation is not possible or practical, rating scales can be a useful alternative. Either the participant (client) or another person (e.g., a social worker or a parent) can

248 PART II • QUANTITATIVE APPROACHES: TYPES OF STUDIES

complete such scales. Self-anchored scales are completed by the client, for example, rating one's level of anxiety on a 0 to 100 scale. Such scales often have excellent psychometric properties (Nugent et al., 2001) and can often be completed very frequently, thus providing fine-grained data for analysis. Several such scales can be combined, as in Tuckman's (1988) Mood Thermometers or Azrin, Naster, and Jones's (1973) Marital Happiness Scale, to provide a more complete, nuanced, and multidimensional picture of personal or couple functioning. Clinicians can complete rating scales (e.g., the Clinical Rating Scale for family assessment; Epstein, Baldwin, & Bishop, 1983), and parents can complete ratings on child behavior.

T hel’e are many sta ndardi zed sca les and rating sca les available; perhaps most useful for
social work p ractice an d research are rapid assessmen t instrum en ts (l~Is). RAis are brief
instruments that can be completed quickly and are designed to be completed often. As a
result, the researcher (or clinician) can collect an adequate number of data points to care-
fully track events in the case and thereby identify function al relal ionships. Please refer to
Chapter 5 (this volum e) fo r more inform ation regarding such inst ruments.

Goal Attainment Scaling and
Behaviorally Anchored Rating Scales
GAS (Bloom et al., 2006; Kiresuk, Smith, & Cardillo, 1994) is a measurement and monitoring approach for tracking progress, usually on more than one goal area at the same time, that has been used for practice and research at all system levels. GAS can be used to concurrently track multiple goal areas for a single client/participant system, while providing an aggregate index of progress. In addition, if GAS is used with a client population, the scores can be aggregated to measure program outcomes (Kiresuk et al., 1994).

GAS is organized around the Goal Attainment Follow-Up Guide, a graphic device that lists five levels of goal attainment on the vertical dimension (from most unfavorable outcome thought likely to most favorable outcome thought likely) and multiple scales (goal areas) with relative weights across the horizontal. This produces a matrix; the items in the matrix are typically individually tailored to the case. The middle level is the "expected level of success" for that scale within the timeframe specified. A scale for depression, for a case in which the initial scores over a baseline period ranged between 31 and 49 (a clinically significant level of depression) on the Generalized Contentment Scale (Hudson, 1982), might list an expected level of 20 to 29 (subclinical), a less than expected level of 30 to 49 (no change), and a most unfavorable level of 50 or greater. Two levels of greater than expected would also be identified. There might also be scales for anxiety, activity level, and quality of partner relationship on the same follow-up guide; depression could be weighted as twice as important as the other scales if that was determined to be the most important goal. Books listing many possible scale items have been produced for GAS to assist in preparation.

Formulas for calculating and aggregating standardized scores across scales are available (Kiresuk et al., 1994).
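The summary statistic usually used with GAS is the Kiresuk-Sherman T-score, which standardizes the weighted sum of attainment levels so that all goals at the expected level yield a score of 50. A minimal sketch, using the conventional formula with its usual 0.3 intercorrelation default; the example weights and attainment levels are hypothetical:

```python
import math

def gas_t_score(attainment, weights, rho=0.3):
    """Kiresuk-Sherman GAS summary T-score.

    attainment: level reached on each scale, coded -2 (most unfavorable)
                to +2 (most favorable), with 0 the expected level.
    weights:    relative importance of each scale.
    rho:        assumed average intercorrelation among scales
                (0.3 is the conventional default).
    """
    wx = sum(w * x for w, x in zip(weights, attainment))
    sw = sum(weights)
    sw2 = sum(w * w for w in weights)
    return 50 + (10 * wx) / math.sqrt((1 - rho) * sw2 + rho * sw ** 2)

# Depression weighted twice the other three goal areas, as in the
# example above; every goal at the expected level yields exactly 50:
print(gas_t_score([0, 0, 0, 0], [2, 1, 1, 1]))  # 50.0
```

Scores above 50 indicate overall attainment better than expected; below 50, worse than expected.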

BARS (Daniels, 2000; Mattaini, 2007) is a variation of goal attainment scaling methods in which each level is specified in clear and observable behavioral terms. BARS can, therefore, combine the advantages of observations and ratings with those of GAS, allowing aggregation of quite different measures for program evaluation, for example. At the same time, detailed analysis should primarily be done at the level of the case.


Existing Data
In many cases, the data needed to complete a single-system study are already being collected and need only to be accessed. This is particularly common in community and policy-level studies. For example, if investigators are interested in reducing levels of drug-related and violent crime in a neighborhood, as in a recent study by Swenson and colleagues in South Carolina, they will typically find that relevant data are collected and reported on a regular (often monthly) and relatively fine-grained basis (Swenson, Henggeler, Taylor, & Addison, 2005). The investigators initiated combined multisystemic therapy and neighborhood development initiatives, viewing the neighborhood as the single system. Using routinely collected data, they discovered that police calls for service in the neighborhood, once one of the highest crime areas in the state, had dropped by more than 80%.

Interobserver Reliability
When behavior is being directly observed and counted or when a variable is being rated by observers using some form of rating scale, it is often important to determine the objectivity of the measures reported. The most common approach used to do so is to measure the extent to which two or more observers see the same things happening or not happening. This can be particularly important when observation involves some judgment: For example, "Was that, or was that not, an act of physical aggression as we have operationally defined it?" There are a number of ways of reporting interobserver agreement. One of the simplest and often the most useful is the calculation of percentages of intervals in which the observers agree and disagree on the occurrence of a behavior of interest (e.g., in how many 10-second intervals was a child on task). (Similar percentages can be calculated for duration and frequency data.) In some cases, such percentages may be artificially high, as when the behavior of interest occurs in very few or in most intervals. In such cases, statistical tools such as kappa can correct for levels of agreement expected by chance. There are also circumstances in which correlations or other indices of agreement may be useful; see Bloom et al. (2006) for more information.
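The interval-agreement percentage and the chance-corrected kappa described above can be computed as follows. This is a minimal sketch using made-up interval codes, where 1 means the two observers recorded the behavior as occurring in that interval:

```python
def percent_agreement(obs1, obs2):
    """Percentage of intervals in which two observers agree on
    occurrence/nonoccurrence of the target behavior."""
    agree = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agree / len(obs1)

def cohens_kappa(obs1, obs2):
    """Cohen's kappa: agreement corrected for chance, for two observers
    coding each interval 1 (occurred) or 0 (did not occur)."""
    n = len(obs1)
    po = sum(a == b for a, b in zip(obs1, obs2)) / n     # observed agreement
    p1a, p1b = sum(obs1) / n, sum(obs2) / n
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)               # chance agreement
    return (po - pe) / (1 - pe)

# Twenty 10-second intervals coded by two observers (hypothetical):
a = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
b = [1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0]
print(percent_agreement(a, b))           # 90.0
print(round(cohens_kappa(a, b), 2))      # 0.8
```

Note how kappa runs below the raw percentage: it discounts the agreement two observers would reach by guessing at these base rates.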

When levels of agreement are not adequate (at least 80%, but in most cases at least 90% is highly desirable), a number of steps can be taken. First, the behavior(s) of interest may need to be more clearly and operationally defined. Additional training (and often retraining over time) and monitoring of the recording procedures may be necessary. It is also sometimes necessary to make changes in observational procedures. It is important that what is asked of observers is realistic and that they do not find the procedures too fatiguing, or accuracy will suffer.

Single-System Designs

The purpose of experimental design, whether in group experiments or SSR, is to confirm or disconfirm the presence of a functional relationship between the independent variables/interventions and the dependent variable(s), ruling out alternative explanations of change to the extent possible. In group experiments, this is commonly done using contrast groups or a variety of quasi-experimental manipulations. In SSR, target systems commonly serve as their own controls, using patterns of change over time. Some of the most common SSR designs are briefly summarized in this section.


The A-B Design
The simplest single-system design that can be used for research purposes is the A-B design. In this design, observations are collected over a period of time prior to introduction of the experimental manipulation; data collection should continue until a stable baseline has been established. Generally, more baseline data points are better than fewer because it is more likely that the full pattern will emerge with an extended baseline and because the number of analytic possibilities expands with more data points. Once a stable baseline has been established, the intervention is introduced while observations continue to be collected, typically for about as long as the baseline data were collected. If the dependent variable changes quickly in a very apparent way, as in Figure 14.1, there is some evidence that the intervention may be responsible for the change.

It is possible, however, that something else occurred at the same time the intervention was introduced, so the evidence is not as strong as that provided by the more rigorous designs described later.

Note that A-B designs are a substantial improvement over case studies in which no baseline data are collected. (These are referred to in the SSR literature as B designs since the label A is always used for baseline phases and B for the [first] intervention phase.) In a B design, data are simply collected during intervention; such a design can be useful for clinical monitoring but does not provide any information regarding causation (the presence or absence of a functional relationship). Such case studies are therefore not generally useful for SSR.

An example of the use of an A-B design is Nugent, Bruley, and Allen (1998), who tested the impact of introducing a form of aggression replacement training (ART; Goldstein, Glick, & Gibbs, 1998) in a shelter for adolescent runaways, in an effort to reduce behavior problems in the shelter. They introduced the intervention at a point when they had 310 days of baseline data available and continued to monitor data for 209 days after the introduction of ART. While the investigators used very sophisticated statistical analyses (discussed later) in the study, in terms of design, this study was a straightforward A-B design. Given the long baseline, the relative stability of improvement over a 7-month period, and the small statistical probability of a change of the magnitude found occurring by chance, the data are arguably persuasive despite the limitations of A-B designs.
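One common way to analyze an A-B series is the two-standard-deviation-band approach described by Bloom et al. (2006): intervention-phase points falling outside a band of two standard deviations around the baseline mean suggest change beyond ordinary baseline variability. A sketch with hypothetical problem-behavior counts (not data from the Nugent study):

```python
import statistics

def two_sd_band(baseline):
    """Lower and upper limits of the two-standard-deviation band
    around the baseline mean (Bloom et al., 2006)."""
    m = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return m - 2 * sd, m + 2 * sd

def points_beyond_band(baseline, intervention):
    """Intervention-phase observations falling outside the band."""
    lo, hi = two_sd_band(baseline)
    return [y for y in intervention if y < lo or y > hi]

# Hypothetical daily counts: a stable baseline, then a drop after
# the intervention is introduced.
baseline = [12, 14, 13, 12, 14, 13]
intervention = [9, 7, 6, 5, 5, 4]
print(points_beyond_band(baseline, intervention))  # all six points fall below the band
```

The method assumes roughly independent, trend-free baseline observations; with the autocorrelation common in SSR data it can overstate significance, which is one reason visual analysis remains central.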

In some situations, multiple cases facing similar issues may be of interest. For example, a clinician-researcher may be interested in the value of a group for parents of children with developmental disabilities. The extent to which each group member implements a particular parenting technique might be one of several dependent variables of interest. Each parent, therefore, would be the subject of a separate study, and their data could be tracked on a separate graph. Most likely, however, the researcher is also interested in the overall utility of the group. In this case, data for all parents could be shown on a single graph, along with a line showing mean data across all group members; see Figure 14.2 for an example (see Nugent et al., 2001, for more information).

Another common situation is one in which multiple dependent variables for a single case are of interest, for example, multiple dimensions of satisfaction with an intimate partner relationship. In this situation, multiple lines, one for each variable of interest, can be plotted on a single graph. Progress on each, but also the pattern among several variables, can then be assessed. Social workers are often interested in simultaneous progress on several issues or goals, and SSR can be ideal for tracking such cases and for studying multiple functional relationships at one time (see also multielement designs below).

[Figure 14.2 appears here: a line graph of Generalized Contentment Scale scores (vertical axis, 0 to 70) across 10 measurement points, divided into baseline and intervention phases.]

Figure 14.2 A graph showing hypothetical results of behavioral activation treatment for depression with four clients. Each line with open symbols represents one client; the darker line with closed circles shows the average score across clients at each point in time. Note that the average level of depression, as measured by the Generalized Contentment Scale (Hudson, 1982), is increasing during baseline, but that two of the four cases are primarily responsible for the increase (and may therefore need rapid intervention). There is an evident change on average and for each case beginning when intervention is initiated.

Withdrawal Designs

The study by Davis and colleagues (2008) discussed earlier in this chapter is an example of a withdrawal design (see Note 1). It began with the collection of baseline data for several days; an intervention package was then introduced while data continued to be collected. The intervention was withdrawn after several days and then reintroduced several days later. Behavior improved immediately and significantly each time the intervention package was introduced and worsened when it was withdrawn, suggesting strongly that the intervention was responsible for the change. This A-B-A-B sequence is the standard pattern for withdrawal designs; with replications, it can be a very powerful design, although it is not a good fit for every situation. See Figure 14.3 for the basic A-B-A-B model.

For example, I once worked in a partial residential program with adolescents with severe autism. Many of the behavior-analytic interventions we used and asked families


[Figure 14.3 appears here: a graph titled "Functional Assessment and Treatment" plotting the percentage of intervals with maladaptive behavior across 20 observation sessions, through Baseline, FBA and Intervention, Reversal, and Reinstatement phases; separate lines show the Self-Initiated, Teacher Attention, Peer Attention, and Academic Escape subtypes.]

Figure 14.3 This graph, from Davis et al. (2008), is an example of a withdrawal design (A-B-A-B). The figure depicts the percentage of overall time intervals during which each of several subtypes of maladaptive behaviors occurred during initial baseline, first intervention, withdrawal, and reinstatement of intervention. The percentage of intervals in which maladaptive behaviors occurred overall is quite high in the first baseline phase and also increased rapidly during the return to baseline. (Note that the withdrawal phase is labeled reversal, as is common in the literature; see Note 1.)

SOURCE: © 2008 Davis et al.; reprinted with permission.

change. It was common under those circumstances to discontinue the intervention briefly; if performance suffered, we could be relatively sure that the intervention was functionally related to the behavior and that we needed to continue it. After some time, however, it commonly made sense to again withdraw the intervention to determine whether natural consequences had become powerful enough to maintain the behavior on their own.

Putnam, Handler, Ramirez-Platt, and Luiselli (2003) used a withdrawal design to improve student behavior on school buses. The school involved was a low-income, urban elementary school in which behavior problems on buses were widespread. The intervention involved working with students to identify appropriate behaviors (a shared power technique; Mattaini & Lowery, 2007) and subsequently reinforcing appropriate behaviors by means of tickets given to students by bus drivers, which were entered into a prize drawing. This was not an extremely labor-intensive arrangement but did require consistency and coordination. The intervention package was therefore introduced for several months following a baseline period and then withdrawn. Office referrals and suspensions for bus behavior went down dramatically during the intervention period but increased again during the withdrawal phase. When intervention was reintroduced, problem data again


declined. It continued to be relatively low during several months of follow-up, when the program was maintained by the school without researcher involvement.

Withdrawal designs are clearly not appropriate under many circumstances. There are often ethical issues with withdrawing treatment; stakeholders also may raise reasonable objections to withdrawing treatment when things are going well. Furthermore, some interventions by design are expected to make irreversible changes. For example, cognitive therapy that changes a client's perspective on the world is designed to be short term, and the results are expected to last beyond the end of treatment. It might be logically possible but would certainly be ethically indefensible to use the techniques of cognitive therapy to try to change self-talk from healthy back to unhealthy and damaging, for example (this would be an example of an actual reversal design). Luckily, other rigorous designs discussed below can be used in circumstances where withdrawal or reversal are unrealistic or inappropriate.

Variations of Withdrawal Designs
Several variations of withdrawal designs can be useful for special research and practice situations. One of these is the A-B-A design. Following collection of baseline data, the intervention is introduced and subsequently discontinued. This design is not generally useful for clinical studies since it is applicable only in circumstances where the expectation is that the impact of the intervention is temporary, and the study ends with a baseline phase, potentially leaving a client system in the same situation he or she was in to begin with. There are times in research, however, when the research interest is not immediately clinical but rather a simple question of causality.

Another occasionally useful design is the B-A-B design, which involves introducing an intervention for a period, withdrawing it, and then reintroducing it. This is not a common or particularly strong design for research purposes but does permit examining changes in the dependent variable concurrent with phase changes. It has been used in some clinical studies where the issue of concern required immediate intervention, and questions arose as to the need to continue that intervention. There are also times when a complex and expensive intervention is briefly withdrawn to be sure that it is still needed. Imagine, for example, that a child with a serious disability is accompanied by a one-on-one aide in a school setting. Given the costs involved, withdrawing this intensive service to determine whether it is necessary may be practically required. If behavior problems increase when the aide is withdrawn and decrease when the aide is subsequently reinstated, it suggests both that the presence of the aide is necessary and that it is functionally related to the level of problem behavior. (On occasion, B-A-B research reports are the result of unplanned interruptions in service, as when the person providing the intervention becomes ill for a period of time or is temporarily assigned to other tasks.)

Multiple Baseline Designs
While withdrawal designs offer considerable rigor, the need to withdraw service often precludes their use in both practice and research. Another SSR strategy also can provide strong evidence of a functional relationship between independent and dependent variables: a set of design types called multiple baseline (MB) designs. The heart of MB designs is to concurrently begin collecting baseline data on two or more (preferably three or more) similar "cases," then introduce a common intervention with one case at a time, while collecting data continuously in all of the cases. See Figure 14.4 for the basic MB model.


[Figure 14.4 appears here: three stacked panels, one each for Tim, Kay, and Jon, plotting the percentage of intervals on task across 11 observations, with a staggered phase change from Baseline to Teacher Greeting in each panel.]

Figure 14.4 A multiple baseline across clients study, taken from the study by Allday and Pakurar (2007, p. 319), described earlier in the chapter. Note that results for the first two clients are more persuasive than for the third, where there is overlap between baseline and intervention, although the average is improved. This might suggest the need for additional intervention intensity or alternative procedures.

SOURCE: © 2007 Journal of Applied Behavior Analysis; reprinted with permission.

CHAPTER 14 • SII’ICtE- SYSHM R tSfARCH 255

The "cases" in MB designs may be individual systems (clients, neighborhoods, even states) but may also be settings or situations (school, home, bus) for the same client or multiple behaviors or problems. In MB research, the intervention must be common across cases. The Allday and Pakurar (2007) study depicted in Figure 14.4 is an example in which a friendly greeting is the common manipulation. As with withdrawal designs, if a change in the dependent variable consistently is associated with intervention, the evidence for a functional relationship increases with each case (particularly with replications, as discussed later).

An interesting example of an MB across cases study was reported by Jason et al. (2005), who tested an approach for starting Oxford House (OH) programs (mutual help recovery homes for persons with substance abuse issues). OH programs appear to be cost-effective and useful for many clients. Jason and colleagues were interested in whether using state funds to hire recruiters and establish startup loan funds would meaningfully increase the number of homes established. Baseline data were straightforward; there were no OH programs in any of the 13 states studied during a 13-year baseline period (and probably ever before). As the result of a federal-level policy change offering funds that states might use in this manner, the recruiter-loan package was made available in 10 states. The number of OH homes increased in all 10 states over a period of 13 years, sometimes dramatically; 515 homes were opened in these 10 states during this time. During the first 9 of those years, data were also available for 3 states that did not establish the recruiter-loan arrangement; a total of 3 OH homes were opened in those states during those 9 years. The recruiter-loan arrangement then became available to those states, and immediate increases were seen, with 44 homes opening in a 4-year period. See Figure 14.5 for the data from this study.

This is somewhat of a hybrid study, with multiple concurrent replications in each phase. Overall, the data clearly support the conclusion that the recruiter-loan package was responsible for the dramatic increases in OH homes, in every state studied. This investigation also shows the potential for use of SSR in community and policy-level research.

An example of an MB across settings/situations study is found in Mattaini, McGowan, and Williams (1996). Baseline data were collected on a mother's use of positive consequences for appropriate behavior by her developmentally delayed child, as well as other parenting behaviors not discussed here. In the situations in which training occurred, including putting away toys, playing with brother, and mealtimes, baseline data were collected within each of those settings for five sessions. An intensive behavioral training program was then conducted in the putting away toys situation only. This resulted in a large and immediate improvement in use of positive consequences in that condition, a very small carryover effect in the playing with brother condition, and no change in the mealtime condition. Training was then implemented in the playing with brother condition, resulting in a significant increase; improvement was maintained in the putting away toys condition, but there was still no improvement in the mealtime condition. When the intervention was introduced there, immediate improvement occurred. In other words, each time the training intervention was introduced, and only when the intervention was introduced, a large immediate effect was apparent.

By now the basic MB logic is probably clear, and research using MB across behaviors/problems is limited, so this discussion will be brief. The most likely situation that would be appropriate for this kind of design, for most social workers, would be the use of a relatively standardized intervention such as solution-focused brief therapy (SFBT) to sequentially work with a client on several problem areas. For example, if a teen client was having conflict with his custodial grandmother, was doing poorly academically, and had few friends, SFBT might be used sequentially with one issue at a time after a baseline period. (There is some risk of treatment for one issue carrying over to others


[Figure 14.5 appears here: two stacked panels plotting the cumulative number of OH houses opened over 25 years, each divided into baseline and intervention phases; the upper panel shows the 10 states that received the recruiter-loan intervention first, the lower panel the 3 comparison states.]

Figure 14.5 Cumulative Number of New Oxford Houses Opened in Two Groups of States Over Time as a Function of Recruiters Plus a Loan Fund Intervention

SOURCE: Jason, Braciszewski, Olson, and Ferrari (2005, p. 76). © 2005 Leonard A. Jason, Jordan Braciszewski, Bradley D. Olson, and Joseph R. Ferrari; reprinted with permission.

in such circumstances, however.) Another example would be the use of a point system in a residential program, in which a client's multiple problems might be sequentially included in the point system.

Changing Intensity Designs
As discussed by Bloom et al. (2006), there are two types of changing intensity designs. In a changing criterion design, goal levels on the dependent variable are progressively stepped up (e.g., an exercise program with higher goals each week) or down (e.g., a smoking cessation program in which the target number of cigarettes smoked is progressively reduced, step by step, over time). If levels of behavior change in ways that are consistent with the


plan, a causal inference is supported, at least to a degree. In a changing program design, the intensity of an intervention is progressively stepped up in a planned manner. For example, the number of hours per week of one-on-one intervention with an autistic child might be increased in 4-hour increments until a pattern of relatively consistent improvement was achieved. This design is more likely to be used in clinical and exploratory studies, where the required intensity of intervention is unknown.
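The causal logic of a changing criterion design, in which inference rests on behavior repeatedly tracking each new criterion level, can be sketched with a simple per-step check. The step schedule and observed values here are invented for illustration:

```python
def meets_criteria(weekly_counts, criteria):
    """For a step-down changing criterion design, check whether the
    behavior stayed at or below each week's criterion level."""
    return [count <= crit for count, crit in zip(weekly_counts, criteria)]

# Smoking cessation: target cigarettes per day stepped down weekly,
# alongside the observed weekly averages.
criteria = [20, 16, 12, 8, 4]
observed = [19, 15, 13, 7, 3]
print(meets_criteria(observed, criteria))  # [True, True, False, True, True]
```

The more consistently behavior shifts with each planned criterion change, as in four of the five weeks here, the stronger the case that the program, rather than some outside event, is driving the change.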

Multielement Designs

Alternating Interventions Design. One SSR design with considerable utility for clinical and direct practice research is the alternating interventions or alternating treatments design, the most common of the so-called multielement designs (one other, simultaneous interventions, is discussed below). In this design, two or more interventions are randomly and rapidly alternated to determine the differential impact of each for the subject (or group of subjects). For example, Jurbergs, Palcic, and Kelley (2007) tested the relative utility of two forms of school-home notes on the performance of low-income children diagnosed with attention-deficit hyperactivity disorders. A school-home note is a daily report on student performance and behavior sent home by the teacher; parents provide previously agreed rewards based on those reports. In this study, one type of note added a loss of points (a minor punishment contingency) arrangement to the standard note. Which type of note was used each day was randomly determined; students knew each day which note they were using. Both produced large results in academic performance and on-task behavior, with no meaningful differences found between the two conditions. Nonetheless, parents preferred the notes that included the punishment arrangement. This study also involved a withdrawal phase, so it is actually a hybrid involving both alternating interventions and A-B-A-B with follow-up design elements. Figure 14.6 shows data for one of the six subjects in the study.

In a second example, Saville, Zinn, Neef, Van Norman, and Ferreri (2006) compared the use of a lecture approach and an alternative teaching method called interteaching for college courses. Interteaching involves having students work in dyads (or occasionally in groups of three) to discuss study questions together; lecturing in interteaching courses typically is used only to clarify areas that students indicate on their interteaching records were difficult to understand. (There have been several earlier studies of interteaching [e.g., Boyce & Hineline, 2002; Saville, Zinn, & Elliott, 2005], all of which indicate that students perform better on examinations and prefer interteaching; clearly, this technique needs to be more widely known in social work education.) In the first of two studies reported by Saville and colleagues (2006), which of the two techniques would be used each day was randomly determined. Quiz scores on the days when lecture was used averaged 3.3 on a 6-point scale, while scores on interteaching days averaged 4.7 (and had much smaller variance). In the second study reported in this article, two sections were used. Each day, one received lecture and the other interteaching. Test scores were higher in every case for the section using interteaching on that day.

There may be order and carryover effects in some alternating intervention studies (e.g.,
which intervention is experienced first may affect the later results), but those who have
studied them believe that rapid alternations and counterbalancing can successfully mini-
mize such effects. It is also always possible that the alternation itself may be an active vari-
able in some cases, for example, because of the novelty involved or because alternation
minimizes satiation related to one or both techniques. Usually, a more significant concern
in alternating intervention studies is to determine how big a difference between interventions should be

258 PART II • QUANTITATIVE APPROACHES: TYPES OF STUDIES

[Figure 14.6 graph: percentage scores for one subject (Lauren) across Baseline, Treatment, Baseline, Treatment, and Follow-up phases, plotted by observation number; legend: Baseline, Response Cost, No Response Cost.]

Figure 14.6 Results for one case in the study by Jurbergs, Palcic, and Kelley (2007, p. 369) of the use of school notes of
two types. In the response cost condition, a mild punishment condition was added to the standard reward arrangement. In
the no-response cost condition, only the reward arrangement was in place. This is an alternating interventions study; notice
how the two conditions are intermixed in random order during the treatment phases.

SOURCE: © 2007 School Psychology Quarterly; reprinted with permission.

regarded as meaningful. Using visual analysis, as is typical in such studies (see below), the
most important question is commonly whether the difference found is clinically or
socially meaningful. It is also in some cases possible to test differences statistically, for
example, using online software to perform randomization tests (Ninness et al., 2002).
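The logic of such a randomization test can be sketched in a few lines of Python. This is a minimal stdlib illustration of the permutation idea, not the Ninness et al. (2002) software, and the data are hypothetical scores from two randomly alternated conditions:

```python
import random
from statistics import mean

def randomization_test(scores_a, scores_b, n_perm=10_000, seed=1):
    """Estimate how often a random relabeling of the pooled scores
    yields a mean difference at least as large as the observed one."""
    observed = abs(mean(scores_a) - mean(scores_b))
    pooled = scores_a + scores_b
    n_a = len(scores_a)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return hits / n_perm  # approximate p-value

# Hypothetical outcome scores under two rapidly alternated interventions
condition_a = [12, 14, 11, 13, 12]
condition_b = [19, 21, 20, 18, 22]
p = randomization_test(condition_a, condition_b)
```

Because the test compares the observed difference against its own permutation distribution, it makes no assumptions about the shape of the underlying distribution, which is what makes the approach attractive for short single-system data series.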

Simultaneous Interventions Design There is also a little-used design discussed by Barlow
and Hersen (1984) called the simultaneous interventions or simultaneous treatments
design, in which multiple interventions are provided at the same time. In the example
they provide (Browning, 1967), different staff members handled a child behavior problem
in different ways, and data were collected on the frequency of time spent with each staff
member. The underlying assumption of the study was that the child would spend more
time with staff members whose approaches were the least aversive. No examples of this
design appear to be present in the social work literature, and few anywhere else. None-
theless, because the logic of the design for questions related to differential preferences is
intriguing, it is included here so that its potential not be forgotten.

Successive Intervention and Interaction Designs

In some cases, the best way to compare two or more interventions is to introduce them
sequentially, thus producing an A-B-C, A-B-C-D, A-B-C-B-C, or other design in which
the alternatives are introduced one after another. For example, after a baseline period in which
crime data are collected, intensive police patrols might be used in a neighborhood for
4 weeks, followed by 4 weeks of citizen patrols (A-B-C design). If substantially different
crime rates are found while one alternative is in place, there is at least reasonable evidence
of differential effectiveness. The evidence could be strengthened using various patterns of

CHAPTER 14 • SINGLE-SYSTEM RESEARCH 259

reversal or withdrawal of conditions. For example, if the data look much better when
citizen patrols are in place, it may be important to reintroduce police patrols again,
followed by citizen patrols, to see if their superiority is found consistently. If neither
shows evidence of much effect, they might be introduced together (A-B-C-BC design), or
another approach (say, an integrated multisystemic therapy and neighborhood coalition
strategy; Swenson et al., 2005) might be introduced (A-B-C-D design).

There can be analytic challenges in all of these sequential designs. All can be strength-
ened somewhat by reintroducing baseline conditions between intervention phases (e.g.,
A-B-A-C or A-B-A-C-A-D). A further issue is the order in which interventions are intro-
duced. For example, citizen patrols logically might only be effective if they are introduced
after a period of police patrols, and the design described above cannot distinguish
whether this is the case, even with reversals. It may occasionally be possible to compare
results in different neighborhoods in which interventions are introduced in different
orders, but the realities of engaging large numbers of neighborhoods for such a study are
daunting. Bloom et al. (2006) and Barlow and Hersen (1984) discuss designs that may be
helpful in sorting out some interaction effects. For example, Bloom and colleagues
describe an interaction design consisting of A-B-A-B-BC-B-BC phases that allows the
investigator to clarify the effects of two interventions separately and in combination.

While the designs described above are relatively standard, it is common for investiga-
tors to assemble elements of multiple designs in original ways to fit research questions
and situations. Once one understands the logic of the common approaches, he or she can
go on to develop a hybrid design or occasionally an entirely novel approach that remains
consistent with the level of rigor required.

Internal, External, Social,
and Ecological Validity in SSR

A number of threats to internal and external validity need to be considered to determine
how strongly to trust causal assertions about the impact of independent variables
(vs. possible rival explanations). The same threats generally need to be considered for
both group experiments and SSR, although in some cases, control for those threats is
established in different ways. See Chapter 4 for general information related to threats to
internal and external validity.

Internal Validity in SSR

Some threats to internal validity that are handled differently in SSR include history, mat-
uration, and regression to the mean. Sequential confounding is an additional threat that
is commonly a greater issue in SSR than in group designs, simply because group designs
tend to be limited to a single intervention.

History. Events other than the intervention could be responsible for observed change.
As noted earlier, in SSR, one approach for controlling for history is introducing,
withdrawing, and reintroducing intervention with the expectation that external events
are unlikely to occur multiple times just when intervention does (see Withdrawal Designs,
above). A second approach involves the use of staggered baselines across persons, settings,
or behaviors, which is based on the same principle (see Multiple Baseline Designs, above).
By contrast, in group experiments, the most common control for history is the use of


random assignment to intervention and control/comparison groups, on the assumption
that external events on average should affect both groups equally.

Maturation. Maturation refers to the effect of ongoing developmental processes, for
example, the effects of aging on performance in many areas. Group experiments again rely
on random assignment here, while single-system designs generally rely on withdrawal/
reversal or multiple baseline approaches. If intervention effects occur only when inter-
vention occurs, maturation is unlikely. The more cases or reversals that are studied, the
more persuasive this argument will be.

Regression to the Mean. In both group experiments and SSR, study participants are com-
monly selected because they are experiencing acute symptoms, when problems may be at
their worst. It is therefore likely that at a later time, problem levels will naturally be some-
what lower. In group experiments, the impact of this effect is likely to be about the same
across groups; the primary related problem in those studies is that regression may add
measurement error to the analysis. In SSR, the best way to control for regression is to
ensure that the baseline phase is long enough to demonstrate stability.

Sequential Confounding. As briefly discussed in the earlier section of this chapter on
successive intervention and interaction designs, in SSR involving more than one inter-
vention phase (e.g., an A-B-C design), it is possible that the effects of a later intervention
phase may be potentiated or weakened by an earlier intervention in the series. It is not
always possible to completely eliminate this threat to internal validity, but it is often
possible to reduce the likelihood of interference and interaction by returning to baseline
conditions (e.g., A-B-A-C) or counterbalancing (e.g., A-B-A-C-A-B). Replications in
which the order of phases is counterbalanced across cases can provide even stronger data
for exploring interactions.

External Validity (Generalizability) in SSR
Researchers are usually interested in interventions with wide applicability. They want to
assist the participants in their own study, but they are hoping that the results will apply to
a much broader population. In nearly all cases, however, in both SSR and group experi-
ments, study participants are drawn from those who are relatively easily available,
and convenience samples are the norm. Despite efforts to ensure that the study sample is
"representative" of a larger population, there is really no way to know this without draw-
ing a random sample. Random samples can only be drawn when an exhaustive list of the
population of interest is available, which is seldom the case. While there are lists of regis-
tered voters or licensed social workers within a state, no such list exists of persons meet-
ing the criteria for schizophrenic disorder, of battered women, or in fact of most
populations that social work research and practice are interested in. In general, no exist-
ing methodology provides assurance of generalizability of results to a larger population in
most cases; rather, a logical case must be made for the likelihood of external validity.

In group designs, if random assignment to intervention and control groups is not pos-
sible, the question of generalizability becomes even more difficult. Adding more partici-
pants does not help much in establishing external validity when samples cannot be
randomly selected from the population. While larger groups provide better statistical
power to determine differences between the groups in the study sample, they are not
necessarily more representative of a larger population. (Be careful not to confuse random
assignment to groups with random selection from the population of interest.) The


actuarial nature of most group experiments is also a threat to external validity, in that
many in the experimental group often do not have good results, but we usually have no
information as to why some and not others appear to benefit.

In the case of SSR, while the general concerns about generalizability in all experimental
studies are also applicable, there is an established approach for building a case for gener-
alizability through replication. In direct replication, once an effect has been obtained with
a small number of cases, the experiment is repeated by the same experimenter in the same
setting with other sets of cases; the more such replications that occur, the stronger the case
for a real effect. Direct replications can be followed by systematic replications, in which
one or more of the key variables in the experiment are varied (e.g., a different experi-
menter, a different setting, or a somewhat different client issue), and the data are exam-
ined to determine whether the same effect is found. Clinical replications may follow, in
which field testing in regular practice settings by regular practitioners occurs. The more
consistent the results across replications, the stronger the case for generalizability; in addi-
tion, cases that do not produce the expected results can be further analyzed and variations
introduced, increasing both knowledge and perhaps effectiveness with unique cases.

Replication is clearly important and all too infrequent (in both SSR and group experi-
ments, in fact). One criticism often heard of SSR is a concern about the small number of
cases. Certainly, results with one or a small number of individuals are not as persuasive as
those that have been widely replicated. On the other hand, most important intervention
findings begin with small numbers of cases and are strengthened through multiple repli-
cations. For example, Lovaas's (1987; McEachin, Smith, & Lovaas, 1993) groundbreaking
findings about the possibility of mainstreaming many autistic children using applied
behavior analysis emerged from a long series of direct and systematic replications; group
comparison studies were useful only after the basic parameters had been clarified through
SSR. At the same time, so long as samples for either SSR or group experiments are not
randomly selected from the population of interest, neither large nor small samples can be
regarded as representative of that population, and external validity relies primarily on
establishing a plausible logical case.

Social and Ecological Validity in SSR
Social validity, as the term is typically used in SSR, refers to (a) the social significance
of the goals established, (b) the social acceptability of the intervention procedures used,
and (c) the social importance of the effects (Wolf, 1978). Another use of the term social
validity is that of Bloom et al. (2006), who use the term to refer to "the extent to which a
measurement procedure is equally valid for clients with different social or cultural char-
acteristics" (p. 85). This is clearly a different construct, however.

An intervention directed toward a goal that is not valued by clients or community; that
relies on procedures that stakeholders find too difficult, expensive, or unpleasant; or that
produces weak effects from the perspective of those stakeholders may be scientifically
interesting but lacks social validity. (Social importance of the effects of intervention has
also been called clinical significance.)

Social validity is clearly central to social work, as the mission of social work ties it fun-
damentally to issues of social importance at all system levels. Increases in internal validity
sometimes reduce social validity; this is one of the central challenges to applied research.
For example, it is relatively easy to introduce new practices for constructing developmen-
tally nutritive cultures in schools when problems are few and the available resources are
great; there is a large literature in this area. Our work suggests that it is much more diffi-
cult to introduce such changes in poor inner-city schools in neighborhoods where the


rates of violence, drug crime, and family breakdown are high and resources sparse
(Mattaini, 2006). Yet this is often where the need for social work intervention is highest,
and a human rights framework suggests that we have an obligation to provide the highest
quality services in such settings.

Ecological validity involves the extent to which observational procedures or other con-
textual parameters of the intervention are consistent with natural conditions (Barlow &
Hersen, 1984). A critical consideration here is reactivity, the extent to which clients
respond differently because of their awareness that they are being observed. A number of
strategies for reducing the possible effects of observation have been developed, including
unobtrusive measures, relying on observations by those who are naturally present, and
providing opportunities for those observed to acclimate to the presence of observers or
observational technologies before formal measurement begins (Barlow & Hersen, 1984).
There is no way to protect completely from reactivity in either SSR or group experiments,
but SSR does offer the possibility of varying observational procedures as one of the active
variables built into the study. It is also possible to vary other contextual parameters in a
deliberate and planned way within SSR, and it is often possible to conduct such research in
natural settings (e.g., homes and classrooms) in ways that vary little from usual conditions.

Analysis of Single-System Data

There are two basic strategies for the analysis of SSR data: visual analysis and statistical
analysis. Each has its strengths and limitations, but in some studies, it is possible to use
both to explore the data more fully.

Visual Analysis
Visual analysis has been the primary approach used in the evaluation of SSR data from
the beginning and is based on the assumption that only effects that are powerful enough
to be obvious to the naked eye should be taken seriously. According to Parsonson and
Baer (1978), "Differences between baseline and experimental conditions have to be clearly
evident and reliable for a convincing demonstration of stable change to be claimed . . . an
effect would probably have to be more powerful than that required to produce a statisti-
cally significant change" (p. 112). (Note that the magnitude of change sought visually is
conceptually related to effect size in statistical analysis.) This search for strong effects is
consistent with common social work sentiment, in that most client and community issues
with which social workers intervene are quite serious and require very substantial levels of
change. The change sought in visual analysis usually is in mean levels of a problem or goal
over time (e.g., is the client more or less depressed than during baseline?). Besides level,
however, both trend (e.g., has a problem that was getting worse over time stabilized or
begun to improve?) and variability (e.g., has a child's erratic behavior leveled out?) are
also often important considerations.

Visual analysis relies on graphing; note the graphs used in earlier discussions of SSR
designs in this chapter. Strong, consistent effects should be immediately obvious to the
observer, and multiple independent observers should agree that an effect is present to
accept that change is real. One common standard for judging the presence of such an
effect is the extent of overlap in data between the baseline phase and the intervention
phase. If there is no overlap (or almost none when there are many data points), the presence
of a real effect usually can be accepted with confidence (see the left panel of Figure 14.7).


Figure 14.7 The data on the left panel show a clear discontinuity at the point of intervention, with
no overlap between phases, suggesting an intervention effect. The data shown on the right, despite
the nearly complete overlap between phases, are also convincing; a clear trend reversed
dramatically at the point of intervention.


Figure 14.8 The data on the left panel show a trend in the baseline data that generally continues
into the intervention phase, suggesting little or no effect. By contrast, there is a clear discontinuity of
level at the point of intervention in the data on the right, which suggests an effect even though the
slopes within phases are similar.

Useful as that criterion can often be, there are other types of obvious effects. For
example, the right panel of Figure 14.7 shows baseline data for a problem to be escalating
over time (an upward trend). When intervention is introduced, the trend reverses. While
there is complete data overlap between the phases, a strong and convincing effect is clearly
present. On the other hand, as shown in the left panel of Figure 14.8, when there is a trend
in the baseline data in the desired direction, and the trend appears to continue into the
intervention phase, one cannot assume an intervention effect. However, if there is
a distinct discontinuity, as in the right panel of Figure 14.8, the evidence for an effect is
more persuasive.
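The overlap criterion described above can also be quantified. The sketch below computes the percentage of intervention-phase points falling outside the baseline range in the desired direction, in the spirit of a percentage-of-nonoverlapping-data index; the data, and the assumption that higher scores are desirable, are hypothetical:

```python
def percent_nonoverlap(baseline, intervention, higher_is_better=True):
    """Percentage of intervention-phase points outside the baseline
    range in the desired direction."""
    if higher_is_better:
        extreme = max(baseline)
        outside = [x for x in intervention if x > extreme]
    else:
        extreme = min(baseline)
        outside = [x for x in intervention if x < extreme]
    return 100.0 * len(outside) / len(intervention)

baseline = [42, 45, 40, 44, 43]
treatment = [55, 60, 58, 62, 57, 59]
pnd = percent_nonoverlap(baseline, treatment)  # 100.0: no overlap at all
```

Like visual overlap judgments, such an index says nothing about baseline trend, so it complements rather than replaces inspection of the graph.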

To be fully persuasive, changes in level, trend, or variability should usually be relatively
immediate. The changes seen in Figure 14.7 are examples; in both cases, change began
occurring as soon as intervention was introduced. If intervention requires a number of
sessions, days, or weeks to begin to show an effect, the graph will usually be much less
convincing. An exception to this principle would be a situation in which one predicts in
advance that change will not occur for a prespecified period of time, based on a convinc-
ing rationale. If change occurs later as predicted, the data would also be persuasive.


What if the patterns identified in the data are ambiguous (Rubin & Knox, 1996)?
Difficult as it may be for an investigator to accept, most single-subject researchers take the
position that an ambiguous graph should usually result in accepting the null hypothesis
(that the intervention does not have a meaningful effect; Mattaini, 1996). There are times
when statistical and quasi-statistical methods as discussed below may help to sort out
such situations, but in many such cases, any effect found is likely to be small and may not
be of clinical or social significance. Another option is to extend the phase in which
ambiguous data appear (Bloom et al., 2006), which may produce further stability. Bloom
et al. (2006) discuss a number of types of data ambiguity and additional strategies that
may be useful. Often, however, refinement and strengthening of the intervention is what
is required, although there certainly are times when finding a possible but uncertain effect
may be a step in the search for a more powerful one.

Issues. Unfortunately, there are other problems with visual analysis beyond the ambiguity
associated with weak effects. While many initially believed that visual analysis was a con-
servative approach in which Type I (false alarm) errors were unlikely, studies have indi-
cated that this is not always the case (DeProspero & Cohen, 1979; Matyas & Greenwood,
1990). Matyas and Greenwood (1990) found false alarm levels ranging from 16% to 84%
with graphs designed to incorporate varying effect sizes, random variations, and degrees
of autocorrelation. (Autocorrelation is discussed later.) Furthermore, DeProspero and
Cohen (1979) found only modest agreement among raters with some kinds of graphs,
despite using a sample of reviewers familiar with SSR. These findings certainly suggest
accepting only clear and convincing graphic evidence and have led to increasing use of
statistical and quasi-statistical methods to bolster visual analyses. Nonetheless, visual
analysis, conservatively handled, remains central to the determination of socially and clini-
cally significant effects.

Statistical and Quasi-Statistical Analysis

The use of statistical methods has been and remains controversial in SSR. There has long
been some concern that relying on statistical methods would be a distraction since it
might result in finding many small and socially insignificant effects (Baer, 1977). Most
single-system studies also involve only modest numbers of data points, which can severely
limit the applicability and power of many types of statistical analyses. In this section, I
briefly introduce several common approaches; space does not permit full development,
and interested readers should therefore refer to the original sources for further informa-
tion. Before looking at the options, however, the issue of autocorrelation in single-system
data must be examined.

Autocorrelation. One serious and unfortunately apparently common issue in the statistical
analysis of single-system data is what is termed autocorrelation. Most statistical techniques
used in SSR assume that the data points are independent, and no single observation can be
predicted from previous data points. Unfortunately, in some cases in SSR, values at one
point in time can to some extent be predicted by earlier values; they are autocorrelated (or
serially dependent). There has been considerable debate over the past several decades as to
the extent and severity of the autocorrelation problem (Bloom et al., 2006; Huitema, 1988;
Matyas & Greenwood, 1990). Autocorrelation can increase both Type I (false positive) and
Type II (false negative) errors. For this reason, statistical methods that take autocorrelation
into account (autoregressive integrated moving averages [ARIMA], for example) or that
transform the data to remove it are preferred when possible. Bloom et al. (2006) provide


statistical techniques to test for autocorrelation as well as transformations to reduce or
remove autocorrelation from a data set. With smaller data sets, autocorrelation may well be
present but may be difficult or impossible to identify or adjust for. In such cases, reliance
on visual analysis may be best, but it is important to note that autocorrelation is often not
evident to the eye and can affect the precision of visual analysis as well. Bloom et al. suggest
that autocorrelation can be reduced by maximizing the interval between measurement
points and by using the most valid and reliable measures possible.
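A simple way to see serial dependence is to estimate the lag-1 autocorrelation, the correlation of each observation with the one immediately preceding it. The following stdlib sketch, with made-up series, is illustrative only; Bloom et al. (2006) present the formal tests:

```python
from statistics import mean

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation: covariance of adjacent observations
    divided by the overall variance of the series."""
    m = mean(series)
    dev = [x - m for x in series]
    num = sum(dev[i] * dev[i - 1] for i in range(1, len(series)))
    den = sum(d * d for d in dev)
    return num / den

steadily_rising = [1, 2, 3, 4, 5, 6, 7, 8]  # a trend implies positive dependence
zigzag = [1, 5, 1, 5, 1, 5, 1, 5]           # alternation implies negative dependence
r_trend = lag1_autocorrelation(steadily_rising)   # 0.625
r_zigzag = lag1_autocorrelation(zigzag)           # -0.875
```

Values near zero are consistent with independence; strongly positive or negative values warn that tests assuming independent observations may mislead.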

Statistical Process Control Charts and Variations. Statistical process control (SPC) charts
are widely used in manufacturing and business settings to monitor whether variations in
an ongoing, stable process are occurring. Determinations about what changes in the data
should be regarded as reflecting real changes are based on decision rules that have a sta-
tistical rationale. A number of types of SPC charts have been developed for different types
of data and research situations (Orme & Cox, 2001). SPC methods are generally useful
even with small numbers of observations (although more are always better) and, with rig-
orous decision rules, are relatively robust even when autocorrelation or nonnormal dis-
tributions are present. Nugent et al. (2001) have developed variations of SPC charts that
take into account such situations as small numbers of data points, nonlinear trends in
baseline phase data, or when the first or last data point in a phase is a serious outlier. The
analyses they suggest, although based on rigorous mathematical proofs, are easy to per-
form with a simple graph and a ruler and, like other SPC-based methods, rely on simple
decision rules (e.g., "two of three data points falling more than two sigma units away from
the extended trend line signify significant change," p. 127).
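To show how such decision rules work, the sketch below flags intervention-phase points that fall more than two sigma units from the baseline mean. It is a simplified, hypothetical version of an SPC-style rule, not a reproduction of the Nugent et al. (2001) procedures, which work from the extended trend line rather than the mean:

```python
from statistics import mean, stdev

def beyond_two_sigma(baseline, intervention):
    """Return intervention-phase points more than two baseline
    standard deviations above or below the baseline mean."""
    m, s = mean(baseline), stdev(baseline)
    upper, lower = m + 2 * s, m - 2 * s
    return [x for x in intervention if x > upper or x < lower]

baseline = [10, 12, 11, 13, 10, 12]
intervention = [14, 15, 16, 15]
flags = beyond_two_sigma(baseline, intervention)  # all four points flagged
```

A stable baseline is essential here: the wider the baseline scatter, the wider the control limits, and the harder it becomes for intervention-phase change to register.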

ARIMA Analysis. ARIMA procedures correct for several possible issues in single-system
and time-series analyses (Gottman, 1981; Nugent et al., 1998). These include autocorrela-
tion, including periodicity (cyclical and seasonal effects), moving average processes, and
violations of the assumption of stationarity. The major obstacles to routine use of ARIMA
procedures are its complexity and the requirement for large numbers (at least dozens) of
data points in each phase. The only social work study using this procedure of which I am
aware is Nugent et al. (1998), but particularly in policy analysis, there is considerable
potential for the use of ARIMA and other related time-series analysis methods such as
segmented regression analysis.
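The "I" in ARIMA stands for "integrated," referring to differencing the series to address nonstationarity such as a linear trend. A minimal illustration of that single step (not of full ARIMA estimation, which requires specialized software and long series):

```python
def difference(series, lag=1):
    """Replace each value with its change from the value `lag` steps
    earlier; one differencing pass removes a linear trend."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

trending = [3, 5, 7, 9, 11, 13]  # rises steadily: nonstationary
print(difference(trending))      # [2, 2, 2, 2, 2]: level after differencing
```

Seasonal patterns can be handled the same way with a larger lag (e.g., lag=7 for a weekly cycle in daily data), which is part of why ARIMA demands so many data points per phase.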

Other Statistical Techniques. The proportion/frequency approach uses the binomial dis-
tribution to compare data in the intervention phase to the typical pattern during base-
line. If an adequate number of observations during intervention fall outside the typical
baseline range (in the desired direction), the change can be regarded as statistically sig-
nificant. The conservative dual-criteria (CDC) approach, described by Bloom et al.
(2006), is a related approach in which results are viewed as statistically significant if and
only if they fall above both an adjusted mean line and an adjusted regression line calcu-
lated from baseline data. The CDC approach appears to be somewhat more robust in the
face of some types of autocorrelation than many other approaches. Under some circum-
stances, standard statistical methods such as t tests and chi-square tests can be used to
test the differences between phases in SSR studies, although such use remains controver-
sial and can be complicated by autocorrelation and the shapes of data distributions,
among other concerns. Recent developments in the application of randomization tests
using software that is freely available online (Ninness et al., 2002) are a major advance, as
the shape of underlying distributions and the small number of observations are not
issues with such analyses.
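The binomial logic behind the proportion/frequency approach can be made concrete. In the hypothetical sketch below, baseline data suggest that any single observation has a 50% chance of landing in the desired zone by accident, and we ask how improbable it is that at least 9 of 10 intervention observations would do so:

```python
from math import comb

def binomial_upper_tail(k, n, p=0.5):
    """Probability of observing at least k successes in n trials
    when each trial succeeds with probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

p_value = binomial_upper_tail(9, 10, 0.5)  # about 0.011
```

Because the result falls well below the conventional .05 threshold, such a run of intervention-phase observations would be regarded as statistically significant under this approach; note that the calculation assumes independent observations, so autocorrelation remains a concern.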


Effect Sizes and Meta-Analysis. Measures of the magnitude of intervention effect, effect
sizes (ESs), are increasingly important in SSR. ESs are the most common metrics used to
compare and aggregate studies in meta-analyses, but they may also be useful for judging
and reporting the power of interventions within individual studies (Parker & Hagan-
Burke, 2007). The calculation of ES in a standard A-B design is straightforward. The mean
value for the baseline phase A is subtracted from the mean for the intervention phase B,
and the result is divided by the standard deviation of the baseline values:

ES = (M_B - M_A) / SD_A

(There are dozens of ways that ES can be calculated, but this is the most common.) A value
of .2 is considered a small effect, .5 a medium effect, and .8 a large effect using this formula
(Cohen, 1988). This measure assumes no meaningful trend in the data, which is not always
the case; other related approaches can be applied in such circumstances (Bloom et al.,
2006). Variations are needed in multiphase designs; for example, A-B ESs across partici-
pants in a multiple baseline study can be averaged to obtain an overall effect size.
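With hypothetical data, the A-B effect size calculation described in this section can be written directly:

```python
from statistics import mean, stdev

def ab_effect_size(baseline, intervention):
    """(Mean of intervention phase B minus mean of baseline phase A),
    divided by the standard deviation of the baseline phase."""
    return (mean(intervention) - mean(baseline)) / stdev(baseline)

baseline = [20, 22, 21, 23, 19]      # phase A observations
intervention = [26, 28, 27, 29, 25]  # phase B observations
es = ab_effect_size(baseline, intervention)  # about 3.79, a very large effect
```

Averaging such per-case ESs across the participants in a multiple baseline study yields an overall index; note, though, that a trend in either phase would make this simple version misleading, as the text cautions.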

Parker and Hagan-Burke (2007) provide several arguments for the use of ES in SSR,
including journal publication expectations and the widespread use of ES in the evidence-
based practice movement. Furthermore, while recognizing that visual analysis is likely to
remain the primary approach in SSR, Parker and Hagan-Burke suggest that ES can
strengthen analysis in four ways: by increasing objectivity, increasing precision, permit-
ting the calculation of confidence intervals, and increasing general credibility in terms of
professional standards. Still, the use of ES in SSR is relatively new, there are many unre-
solved technical concerns (Bloom et al., 2006), and, most significantly, patterns in the
data, which are often analytically important, are lost in reducing results to a single index.

One approach to examining generalizability in SSR is meta-analysis, in which the
results of multiple studies are essentially pooled to increase the number of cases (and thus
statistical power) and the breadth of contextual factors being examined. Meta-analysis has
become common in group experimental research, and interest in meta-analysis for SSR is
growing (Busk & Serlin, 1992). The number of meta-analytic studies of SSR has been
small, however, so the final utility of the approach remains to be seen.

A number of serious issues associated with meta-analysis should not be minimized
(Fischer, 1990). While there are statistical and methodological issues and limitations, per-
haps the most serious concerns are (a) the loss of information related to patterns of
change over time (Salzberg, Strain, & Baer, 1987) and (b) the extent to which the inter-
ventions applied across studies are in fact substantially identical. The first involves a
trade-off between statistical and analytic power, recalling that manipulations of variables
and observation of patterns of results over time are the heart of a natural science
approach to behavior change. Standard meta-analytic techniques reduce the results of
each study to a single effect size, thus losing much of the information provided by the
study. With regard to the second concern, any experienced social worker can tell you that
two professionals might use the same words with the same client and have wildly differ-
ent results depending on history, skills, nonverbal and paraverbal behaviors, levels of
warmth and authenticity, context, and a wide range of other factors. The contrast with
giving the same pill in multiple sites is considerable (although contextual factors no
doubt are important there, too). In practice research, achieving consistency across cases is
difficult, and across studies is even more so. Nonetheless, the potential for meta-analytic
methods in SSR is currently unknown and should continue to be explored.

CHAPTER 14 • SINGLE-SYSTEM RESEARCH

Single-System Research and
the Evidence-Based Practice Movement

Across all of the helping professions, demands for evidence-based and evidence-informed
practice are strong and growing, and this is as it should be. When we know what is likely
to help, there is a clear ethical obligation to use that information in work with clients and
client groups. Requirements for the use of evidence-based practices are increasingly built
into legislation and policy. In many cases, randomized clinical trials (RCTs, a very rigor-
ous type of group experimental design) are regarded as the "gold standard" for determin-
ing which practices should be regarded as evidence based. A strong case can be made,
however, that RCTs are sometimes not the best approach for validating best practices and
that rigorous SSR offers an alternative for doing so under certain circumstances (Horner
et al., 2005; Odom et al., 2005; Weisz & Hawley, 2002).

As elaborated by Horner and colleagues (2005), five standards should be used to
determine that a practice has been documented as evidence-based using SSR methods:
"(a) the practice is operationally defined, (b) the context in which the practice is to be
used is defined, (c) the practice is implemented with fidelity, (d) results from single-
subject research document the practice to be functionally related to change in depen-
dent measures, and (e) the experimental effects are replicated across a sufficient
number of studies, researchers, and participants to allow confidence in the findings"
(pp. 175-176). Most of these standards have been discussed in some depth earlier in
this chapter. The specific standard discussed by Horner et al. for replication is worth
particular note, however: "A practice may be considered evidence based when (a) a
minimum of five single-subject studies that meet minimally acceptable methodological
criteria and document experimental control have been published in peer-reviewed
journals, (b) the studies are conducted by at least three different researchers across at
least three different geographic locations, and (c) the five or more studies include a total
of at least 20 participants" (p. 176).
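The quantitative replication thresholds in that standard can be expressed as a simple check. The function name and parameters below are mine, not Horner et al.'s; and note that the other four standards (operational definition, context, fidelity, and a documented functional relation) require qualitative judgment that no such check can replace.

```python
def meets_replication_threshold(n_studies, n_researchers, n_locations, n_participants):
    """Check the numeric replication criteria from Horner et al. (2005):
    at least five qualifying peer-reviewed studies, by at least three
    researchers in at least three geographic locations, with at least
    20 total participants across the studies."""
    return (n_studies >= 5
            and n_researchers >= 3
            and n_locations >= 3
            and n_participants >= 20)

print(meets_replication_threshold(5, 3, 3, 20))  # meets the minimums
print(meets_replication_threshold(4, 3, 3, 25))  # too few studies
```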

Recalling some of the limitations of group comparison experiments outlined early in
this chapter, the importance of rigorous single-system designs for determining which
practices should be regarded as evidence based is clear. The flexibility and lower costs of
SSR may produce more information about those best practices much more quickly than
RCTs and other group experiments under circumstances where what is known is limited.

Single-System Research: An Example

A strong example of valuable SSR in social work is the article "Use of Single-System
Research to Evaluate the Effectiveness of Cognitive-Behavioural Treatment of
Schizophrenia" by William Bradshaw (2003). This study used a multiple-baseline-across-
seven-subjects design to test the effects of cognitive-behavioral treatment (CBT) over a
36-month period on standardized measures of (a) psychosocial functioning, (b) severity of
symptoms, and (c) attainment of self-selected goals. There has been only very limited
work, especially in the United States, on the value of CBT for persons with schizophrenia,
and the studies of short-term CBT intervention have demonstrated at best weak effects.
The researcher hypothesized that longer term intervention with this long-term condition
(at least 6 months of impairment are required for the diagnosis to be made) would pro-
duce substantial effects.


Method

Ten adult clients were randomly selected from the ongoing caseload of a community
mental health center; diagnoses were confirmed by psychiatric and social work diagnosti-
cians with 100% agreement. Three of the 10 dropped out early in the study. Average
length of illness for the remaining 7 was 11 years, and 6 of the 7 were on psychotropic
medication throughout the study. Of these clients, 2 were assigned to 6-month baselines,
2 to 9-month baselines, and the remaining 3 to 12-month baselines. During baseline con-
ditions, quarterly monitoring by a psychiatrist and a case manager was provided (stan-
dard treatment). At the end of the baseline period for each, weekly CBT was initiated. The
treatment package included efforts to strengthen the therapeutic alliance (in part through
the use of solution-focused techniques), education about the illness, and standard cognitive-
behavioral interventions, including activity scheduling, exercise, relaxation exercises, and
cognitive restructuring, among others. Quarterly evaluations on the measures of func-
tioning and symptoms were independently conducted by the clinician (the researcher)
and case managers for each client, with very close agreement.
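The staggered schedule is the logical core of a multiple-baseline design: because CBT begins at different times for different clients, a change that coincides with each client's own treatment start is hard to attribute to history or maturation. A minimal sketch of that schedule follows; the mapping of specific clients to baseline lengths is hypothetical, since the article reports only the counts (2 at 6 months, 2 at 9, 3 at 12).

```python
# Hypothetical client-to-baseline assignment consistent with the
# reported counts; months of weekly CBT follow each baseline within
# the 36-month study period.
baseline_months = {1: 6, 2: 6, 3: 9, 4: 9, 5: 12, 6: 12, 7: 12}
STUDY_MONTHS = 36

for client, b in sorted(baseline_months.items()):
    print(f"Client {client}: baseline months 0-{b}, weekly CBT months {b}-{STUDY_MONTHS}")
```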

Analysis

Study data were analyzed both visually and statistically. Quarterly scores for psychosocial
functioning and psychiatric symptoms were plotted on standard graphs, from which pat-
terns could be identified (see Figure 14.9 for one example).

The visual results were compelling, with a few anomalies, as would be expected work-
ing with persons with severe mental illness. The data were tested for autocorrelation;
none was found in the baseline data, but autocorrelation was found in all seven cases in
the intervention phase data. As a result, a first differencing transformation (Bloom et al.,
2006) was used to remove the effects of autocorrelation, and t tests were then conducted.
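The analytic steps just reported (screen for autocorrelation, difference the data, then run t tests) can be sketched with standard-library Python. The scores are invented, the lag-1 statistic is only an informal stand-in for the formal tests in Bloom et al. (2006), and the pooled t statistic here is the basic equal-variance form.

```python
import statistics

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation: an informal screen for serial dependence."""
    mean = statistics.mean(series)
    dev = [v - mean for v in series]
    return sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)

def first_difference(series):
    """First differencing transformation: y[t] = x[t] - x[t-1], a common
    way to remove autocorrelation before applying a t test."""
    return [b - a for a, b in zip(series, series[1:])]

def pooled_t(x, y):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

# Hypothetical quarterly scores for one client
baseline = [10, 11, 10, 12, 11]
intervention = [13, 15, 16, 18, 19, 21, 22]

# Trending intervention data show positive serial dependence,
# so the phases are differenced before computing t.
print(round(lag1_autocorrelation(intervention), 2))
print(round(pooled_t(first_difference(baseline), first_difference(intervention)), 2))
```

The resulting t would then be referred to a t distribution with the appropriate degrees of freedom to obtain a p value.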

Results
All of the investigator's hypotheses were supported. As shown in Figure 14.9, all clients
showed statistically significant improvements in psychosocial functioning, with an aver-
age effect size of 2.96 (a statistically large effect, generally reflecting improvement of about
one and one-half levels on the 7-point scale). All showed statistically significant decreases
in symptoms, with an average effect size of -2.19 (again a large effect). Every client also
made greater than expected progress on self-selected goals from pretest to posttest, using
standardized GAS scores. Visual analysis showed clear improvements for every client on
each of the scales following flat baselines. Recognizing the limitations of this study being
conducted by a single investigator in a single agency with a relatively homogeneous pop-
ulation, the researcher appropriately called for systematic replications by others.

Conclusion

Single-system research is, in many ways, an ideal methodology for social work research.
Social work practitioners commonly work with one system at a time, under often
unique contextual conditions, and SSR methods have the potential to make such work
much more powerful. Every client's world and behavioral history are different, and unlike
in many types of medicine, for example, standardized treatments are unlikely to be widely
applicable without individualization. While social science methods, group experiments,


[Figure 14.9 appears here: seven stacked panels, one per client, plotting quarterly FRS (psychosocial functioning) scores from month 0 to month 48, with the baseline and intervention phases marked in each panel.]

Figure 14.9 Measures of psychosocial functioning for the seven clients included in the Bradshaw (2003) study described in
the text.

SOURCE: © British Journal of Social Work; reprinted with permission.


and other forms of scholarship have important niches in social work research, perhaps no
other strategy is therefore at the same time as practical and as powerful for determining
what helps and what hurts, under what conditions, as is single-system research grounded
in natural science strategies.

Note

1. Withdrawal designs are sometimes called reversal designs; technically, however, in a reversal
design, the intervention is applied during the reversal phase in ways that attempt to make the
behavior of interest worse; there are few if any circumstances when social work researchers would
have occasion to use this approach.

References

Allday, R. A., & Pakurar, K. (2007). Effects of teacher greetings on student on-task behavior. Journal of Applied Behavior Analysis, 40, 317-320.

Azrin, N. H., Naster, B. J., & Jones, R. (1973). Reciprocity counseling: A rapid learning-based procedure for marital counseling. Behaviour Research & Therapy, 11, 365-382.

Baer, D. M. (1977). Reviewer's comment: Just because it's reliable doesn't mean that you can use it. Journal of Applied Behavior Analysis, 10, 117-119.

Barlow, D. H., & Hersen, M. (1984). Single case experimental designs: Strategies for studying behavior change (2nd ed.). New York: Allyn & Bacon.

Biglan, A., Ary, D., & Wagenaar, A. C. (2000). The value of interrupted time-series experiments for community intervention research. Prevention Science, 1, 31-49.

Bloom, M., Fischer, J., & Orme, J. (2006). Evaluating practice: Guidelines for the accountable professional (5th ed.). Boston: Allyn & Bacon.

Boyce, T. E., & Hineline, P. N. (2002). Interteaching: A strategy for enhancing the user-friendliness of behavioral arrangements in the college classroom. The Behavior Analyst, 25, 215-226.

Bradshaw, W. (2003). Use of single-system research to evaluate the effectiveness of cognitive-behavioural treatment of schizophrenia. British Journal of Social Work, 33, 885-899.

Browning, R. M. (1967). A same-subject design for simultaneous comparison of three reinforcement contingencies. Behaviour Research and Therapy, 5, 237-243.

Busk, P. L., & Serlin, R. C. (1992). Meta-analysis for single-case research. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case research design and analysis: New directions for psychology and education (pp. 187-212). Hillsdale, NJ: Lawrence Erlbaum.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Coulton, C. (2005). The place of community in social work practice research: Conceptual and methodological developments. Social Work Research, 29(2), 73-86.

Daniels, A. C. (2000). Bringing out the best in people. New York: McGraw-Hill.

Davis, R. L., Ninness, C., Rumph, R., McCuller, G., Stahl, K., Ward, T., et al. (2008). Functional assessment of self-initiated maladaptive behaviors: A case study. Behavior and Social Issues, 17, 66-85.

DeProspero, A., & Cohen, S. (1979). Inconsistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 12, 573-579.

Dimidjian, S., Hollon, S. D., Dobson, K. S., Schmaling, K. B., Kohlenberg, R. J., Addis, M. E., et al. (2006). Randomized trial of behavioral activation, cognitive therapy, and antidepressant medication in the acute treatment of adults with major depression. Journal of Consulting and Clinical Psychology, 74, 658-670.

Embry, D. D. (2004). Community-based prevention using simple, low-cost, evidence-based kernels and behavior vaccines. Journal of Community Psychology, 32, 575-591.

Embry, D. D., Biglan, A., Galloway, D., McDaniels, R., Nunez, N., Dahl, M. J., et al. (2007). Evaluation of reward and reminder visits to reduce tobacco sales to young people: A multiple-baseline across two states. Unpublished manuscript.

Epstein, N. B., Baldwin, L. M., & Bishop, D. S. (1983). The McMaster Family Assessment Device. Journal of Marital and Family Therapy, 9, 171-180.

Fischer, J. (1990). Problems and issues in meta-analysis. In L. Videka-Sherman & W. J. Reid (Eds.), Advances in clinical social work research (pp. 297-325). Washington, DC: NASW Press.

Fisher, K., & Hardie, R. J. (2002). Goal attainment scaling in evaluating a multidisciplinary pain management program. Clinical Rehabilitation, 16, 871-877.

Goldstein, A. P., Glick, B., & Gibbs, J. C. (1998). Aggression replacement training: A comprehensive intervention for aggressive youth (2nd ed.). Champaign, IL: Research Press.

Gottman, J. (1981). Time series analysis: A comprehensive introduction for social scientists. New York: Cambridge University Press.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

Hudson, W. W. (1982). The clinical measurement package: A field manual. Homewood, IL: Dorsey.

Huitema, B. E. (1988). Autocorrelation: 10 years of confusion. Behavioral Assessment, 10, 253-294.

Jason, L. A., Braciszewski, J., Olson, B. D., & Ferrari, J. R. (2005). Increasing the number of mutual help recovery homes for substance abusers: Effects of government policy and funding assistance. Behavior and Social Issues, 14, 71-79.

Johnston, J. M., & Pennypacker, H. S. (1993). Readings for "Strategies and tactics of behavioral research" (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Jurbergs, N., Palcic, J., & Kelley, M. L. (2007). School-home notes with and without response cost: Increasing attention and academic performance in low-income children with attention-deficit/hyperactivity disorder. School Psychology Quarterly, 22, 358-379.

Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart & Winston.

Kiresuk, T. J., Smith, A., & Cardillo, J. E. (1994). Goal attainment scaling: Applications, theory, and measurement. Hillsdale, NJ: Lawrence Erlbaum.

Kopp, J. (1993). Self-observation: An empowerment strategy in assessment. In J. B. Rauch (Ed.), Assessment: A sourcebook for social work practice (pp. 255-268). Milwaukee, WI: Families International.

Lee, V. L. (1988). Beyond behaviorism. Hillsdale, NJ: Lawrence Erlbaum.

Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55, 3-9.

Mattaini, M. A. (1996). The abuse and neglect of single-case designs. Research on Social Work Practice, 6, 83-90.

Mattaini, M. A. (1997). Clinical intervention with individuals. Washington, DC: NASW Press.

Mattaini, M. A. (2006). Will cultural analysis become a science? Behavior and Social Issues, 15, 68-80.

Mattaini, M. A. (2007). Monitoring social work practice. In M. A. Mattaini & C. T. Lowery (Eds.), Foundations of social work practice (4th ed., pp. 147-167). Washington, DC: NASW Press.

Mattaini, M. A., & Lowery, C. T. (2007). Perspectives for practice. In M. A. Mattaini & C. T. Lowery (Eds.), Foundations of social work practice (4th ed., pp. 31-62). Washington, DC: NASW Press.

Mattaini, M. A., McGowan, B. G., & Williams, G. (1996). Child maltreatment. In M. A. Mattaini & B. A. Thyer (Eds.), Finding solutions to social problems: Behavioral strategies for change (pp. 223-266). Washington, DC: American Psychological Association.

Matyas, T. A., & Greenwood, K. M. (1990). Visual analysis of single-case time series: Effects of variability, serial dependence, and magnitude of intervention effects. Journal of Applied Behavior Analysis, 23, 341-351.

McEachin, J. J., Smith, T., & Lovaas, O. I. (1993). Long-term outcome for children with autism who received early intensive behavioural treatment. American Journal on Mental Retardation, 97, 359-372.

Moore, K., Delaney, J. A., & Dixon, M. R. (2007). Using indices of happiness to examine the influence of environmental enhancements for nursing home residents with Alzheimer's disease. Journal of Applied Behavior Analysis, 40, 541-544.

Newton, M. (2002). Evaluating the outcome of counselling in primary care using a goal attainment scale. Counselling Psychology Quarterly, 15, 85-89.

Ninness, C., Newton, R., Saxon, J., Rumph, R., Bradfield, A., Harrison, C., et al. (2002). Small group statistics: A Monte Carlo comparison of parametric and randomization tests. Behavior and Social Issues, 12, 53-63.

Nugent, W. R., Bruley, C., & Allen, P. (1998). The effects of aggression replacement training on antisocial behavior in a runaway shelter. Research on Social Work Practice, 8, 637-656.

Nugent, W. R., Sieppert, J. D., & Hudson, W. W. (2001). Practice evaluation for the 21st century. Belmont, CA: Brooks/Cole.

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137-148.

Orme, J. G., & Cox, M. E. (2001). Analyzing single-subject design data using statistical process control charts. Social Work Research, 25, 115-127.

Parker, R., & Hagan-Burke, S. (2007). Useful effect size interpretations for single case research. Behavior Therapy, 38, 95-105.

Parsonson, B. S., & Baer, D. M. (1978). The analysis and presentation of graphic data. In T. R. Kratochwill (Ed.), Single subject research: Strategies for evaluating change (pp. 101-165). New York: Academic Press.

Putnam, R. F., Handler, M. W., Ramirez-Platt, C. M., & Luiselli, J. K. (2003). Improving student bus-riding behavior through a whole-school intervention. Journal of Applied Behavior Analysis, 36, 583-590.

Rubin, A., & Knox, K. S. (1996). Data analysis problems in single-case evaluation: Issues for research on social work practice. Research on Social Work Practice, 6, 40-65.

Salzberg, C. L., Strain, P. S., & Baer, D. M. (1987). Meta-analysis for single subject research: When does it clarify, when does it obscure? Remedial and Special Education, 8, 43-48.

Saville, B. K., Zinn, T. E., & Elliott, M. P. (2005). Interteaching vs. traditional methods of instruction: A preliminary analysis. Teaching of Psychology, 32, 161-163.

Saville, B. K., Zinn, T. E., Neef, N. A., Van Norman, R., & Ferreri, S. J. (2006). A comparison of interteaching and lecture in the college classroom. Journal of Applied Behavior Analysis, 39, 49-61.

Serna, L. A., Schumaker, J. B., Sherman, J. A., & Sheldon, J. B. (1991). In-home generalization of social interactions in families of adolescents with behavior problems. Journal of Applied Behavior Analysis, 24, 733-746.

Stuart, R. B. (1980). Helping couples change. New York: Guilford.

Swenson, C. C., Henggeler, S. W., Taylor, I. S., & Addison, O. W. (2005). Multisystemic therapy and neighborhood partnerships. New York: Guilford.

Thyer, B. A. (1991). Guidelines for evaluating outcome studies on social work practice. Research on Social Work Practice, 1, 76-91.

Thyer, B. A., & Myers, L. L. (2007). A social worker's guide to evaluating practice outcomes. Alexandria, VA: Council on Social Work Education.

Tuckman, B. W. (1988). The scaling of mood. Educational and Psychological Measurement, 48, 419-427.

Weisz, J. R., & Hawley, K. M. (2002). Procedural and coding manual for identification of beneficial treatments. Washington, DC: American Psychological Association.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.


USEFUL WEB SITES

http://academic.mu.edu/sppa/slong/sppa261-6
A nice PowerPoint presenting the distinctions between case studies and single-case research designs.

http://en.wikipedia.org/wiki/Single-subject_research
Wikipedia entry on single-subject research.

http://www.abainternational.org/
Web site for the Association for Behavior Analysis International, the leading professional and research
organization that focuses on research using single-case designs.

http://seab.envmed.rochester.edu/jaba/
Web site of the Journal of Applied Behavior Analysis, the oldest journal on applied research in issues
of social significance, with a focus on single-case designs.

DISCUSSION QUESTIONS

1. How are single-system research designs an improvement over traditional case studies?

2. When may the use of single-system research designs be preferred over using group research designs?

3. How can the use of one or more baseline phases enhance the internal validity of single-system
research?

4. How can external validity be established for findings obtained from single-system research?

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

A Preliminary Examination to Identify the Presence of Quality Indicators in Single-subject Research
Tankersley, Melody; Cook, Bryan G.; Cook, Lysandra
Education & Treatment of Children; Nov 2008; 31, 4; ProQuest Central, pg. 523


Rubric Detail

 

A rubric lists grading criteria that instructors use to evaluate student work. Your instructor linked a rubric to this item and made it available to you.

Content


Name: SOCW_6311_Week3_Assignment_Rubric


Responsiveness to Directions–

Levels of Achievement:

Excellent 17.28 (27.00%) – 19.2 (30.00%)

Paper fully addresses all instruction prompts.

Good 15.36 (24.00%) – 17.088 (26.70%)

Paper addresses most of the instruction prompts; however, one or more prompts may have been insufficiently addressed.

Fair 13.44 (21.00%) – 15.168 (23.70%)

Paper addresses some of the instruction prompts, but may have missed several prompts or did not sufficiently address the majority of prompts.

Poor 0 (0.00%) – 13.248 (20.70%)

Paper does not address the majority of instruction prompts and/or insufficiently addresses all instruction prompts.

Content–

Levels of Achievement:
Excellent 17.28 (27.00%) – 19.2 (30.00%)

Paper demonstrates an excellent understanding of all of the concepts and key points presented in the text(s) and Learning Resources. Paper provides significant detail including multiple relevant examples, evidence from the readings and other sources, and discerning ideas. Paper demonstrates exemplary critical thought.

Good 15.36 (24.00%) – 17.088 (26.70%)

Paper demonstrates a good understanding of most of the concepts and key points presented in the text(s) and Learning Resources. Paper includes moderate detail, evidence from the readings, and discerning ideas. Paper demonstrates good critical thought.

Fair 13.44 (21.00%) – 15.168 (23.70%)

Paper demonstrates a fair understanding of the concepts and key points as presented in the text(s) and Learning Resources. Paper may be lacking in detail and specificity and/or may not include sufficient pertinent examples or provide sufficient evidence from the readings. Paper demonstrates some critical thought.

Poor 0 (0.00%) – 13.248 (20.70%)

Paper demonstrates poor understanding of the concepts and key points of the text(s) and Learning Resources. Paper is missing detail and specificity and/or does not include any pertinent examples or provide sufficient evidence from the readings. Paper demonstrates poor critical thought.

Competency: Evaluate Practice with Individuals, Families, Groups, Organizations, and Communities – Skills–

Levels of Achievement:

Excellent 2.88 (4.50%) – 3.2 (5.00%)

Student demonstrates an excellent ability to create a single system design. Student describes a well thought out evaluation plan that accurately identifies target outcomes and appropriate measures.

Good 2.56 (4.00%) – 2.848 (4.45%)

Student demonstrates a good ability to create a single system design. Student describes a thought out evaluation plan that accurately identifies target outcomes and appropriate measures.

Fair 2.24 (3.50%) – 2.528 (3.95%)

Student demonstrates an emerging ability to create a single system design. Student describes an evaluation plan that identifies target outcomes and ways to measure; however, target outcomes may not be appropriate for intervention and/or measurement may not align with outcomes.

Poor 0 (0.00%) – 2.208 (3.45%)

Student demonstrates little or no ability to create a single system design. Student evaluation plan is not sufficiently developed and does not accurately identify target outcomes or appropriate measures.

Competency: Evaluate Practice with Individuals, Families, Groups, Organizations, and Communities – Cognitive and Affective Processes–

Levels of Achievement:
Excellent 2.88 (4.50%) – 3.2 (5.00%)

Student demonstrates excellent critical thought related to development of single-systems design. Student provides an excellent explanation of how measurement can assist with effective intervention.

Good 2.56 (4.00%) – 2.848 (4.45%)

Student demonstrates good critical thought related to development of single-systems design. Student provides a good explanation of how measurement can assist with effective intervention.

Fair 2.24 (3.50%) – 2.528 (3.95%)

Student demonstrates emerging critical thought related to development of single-systems design. Student provides some explanation of how measurement can assist with effective intervention. Thought may need further development or specificity.

Poor 0 (0.00%) – 2.208 (3.45%)

Student demonstrates limited or no critical thought related to development of single-systems design. Student provides little if any explanation of how measurement can assist with effective intervention.

Writing–

Levels of Achievement:
Excellent 17.28 (27.00%) – 19.2 (30.00%)

Paper is well organized, uses scholarly tone, follows APA style, uses original writing and proper paraphrasing, contains very few or no writing and/or spelling errors, and is fully consistent with graduate level writing style. Paper contains multiple, appropriate and exemplary sources expected/required for the assignment.

Good 15.36 (24.00%) – 17.088 (26.70%)

Paper is mostly consistent with graduate level writing style. Paper may have some small or infrequent organization, scholarly tone, or APA style issues, and/or may contain a few writing and spelling errors, and/or somewhat less than the expected number of or type of sources.

Fair 13.44 (21.00%) – 15.168 (23.70%)

Paper is somewhat below graduate level writing style, with multiple smaller or a few major problems. Paper may be lacking in organization, scholarly tone, APA style, and/or contain many writing and/or spelling errors, or shows moderate reliance on quoting vs. original writing and paraphrasing. Paper may contain inferior resources (number or quality).

Poor 0 (0.00%) – 13.248 (20.70%)

Paper is well below graduate level writing style expectations for organization, scholarly tone, APA style, and writing, or relies excessively on quoting. Paper may contain few or no quality resources.


Title of the Paper in Full

Student Name

Program Name or Degree Name (e.g., Master of Science in Nursing), Walden University

COURSE XXX: Title of Course

Instructor Name

Month XX, 202X

Title of the Paper in Full

When you download a Walden template, the first step is to save it locally to your computer using the Save As command. Make sure you move the document to a new location on your computer when you Save As; documents should not be kept in the Downloads folder. When you are ready to use the template for a paper, open it and immediately Save As, giving the document a new name. Once you have renamed the document, you can safely use the Save command as you write.

APA format and college-level writing can be difficult for many students returning to school after several years away from academia. An abstract is typically not required for the short papers that undergraduates write, so an abstract page is not included in this template but can be added if needed (you can find a version with the abstract on our General Templates page). The references page shows some sample references for sources such as webpages, books, journal articles, and course videos. Below is some advice for writing your paper and adhering to APA standards.

Your introductory paragraph and every paragraph that follows should have a minimum of three sentences, with an average of four to five and no more than seven sentences. The last sentence of your opening paragraph should be the thesis statement, which summarizes the purpose of the assignment and how you intend to address it. The sentences preceding your thesis statement should simply provide background that contextualizes your thesis for readers.

Each paragraph should begin with a topic sentence, which summarizes the paragraph’s main argument or idea. Also, the last sentence (or lead-out) of each paragraph should be a transition statement that connects what you discussed in that paragraph to what is to come in the next one. In the middle of each paragraph, you should cover something with your own thoughts and, in a separate sentence, provide a sentence paraphrased from a source with an in-text citation at the end. The source may back up your opinion, give an alternative viewpoint, or simply provide some background. See the Writing Center’s webpage on paragraphs for further advice.

Try to use paraphrases instead of direct quotations when possible, quoting only when the meaning of the idea or excerpt would be lost if you paraphrased it. All information from sources, whether paraphrased or quoted, needs to be cited. Citations should be in parenthetical or narrative format and include the last name(s) of the author(s) or the name of the organization that published the material, the year of publication, and a page or paragraph number for quoted material. Each source cited in your paper, unless it is a personal communication, should have a corresponding reference list entry. If no date is given for a source, write “n.d.” (for “no date”) in place of the year. This sentence does not come from a source, but I will end it with an in-text citation so you can see an example (Author, n.d.). If you have more than two sentences of information from one source, make sure it is clear to the reader where the information in each sentence comes from, using citations or other cue phrases (e.g., “The authors also stated…”). For more information and examples, see APA 7, Section 8.

Many websites are suspect sources of factual, unbiased information. In a nutshell, avoid using Wikipedia, About.com, Answers.com, or similar websites, as the Writing Center explains in the “Why You Shouldn’t Wiki” blog post. Though some .com sites are acceptable, most undergraduates have trouble identifying whether they can be trusted, so an easy guideline is to avoid them. Websites ending in .gov, .net, .edu, .org, and so forth are typically more trustworthy than a .com source. See the Library’s Evaluating Resources webpage for more tips on finding reliable sources.

The body of your paper should have a couple of paragraphs or more. Your conclusion paragraph should briefly summarize the main points of your paper and place the paper in the context of social change. While your conclusion should not introduce new topics, you may suggest a direction for future research. Generally, you should not write anything in the conclusion that would require you to cite a source; instead, the conclusion should represent only your own thoughts and analysis.

Make sure you follow directions, and we recommend you download the grading rubric from Doc Sharing that breaks down how an assignment is graded. A one-page essay means a full page of writing and does not include elements such as references, tables or figures, or the title page. The requirement of using two sources in your assignment directions does not mean simply providing two in-text citations for the same source; the sources themselves must be different. Lastly, if you have any questions about writing a paper or properly citing sources, feel free to contact the Writing Center at writingsupport@waldenu.edu or through our Live Chat Hours.

References

(Note that the following references are intended as examples only. These entries illustrate different types of references but are not cited in the text of this template. In your paper, be sure every reference entry matches a citation, and every citation refers to an item in the reference list.)

American Counseling Association. (n.d.). About us. https://www.counseling.org/about-us/about-aca

Anderson, M. (2018). Getting consistent with consequences. Educational Leadership, 76(1), 26–33.

Bach, D., & Blake, D. J. (2016). Frame or get framed: The critical role of issue framing in nonmarket management. California Management Review, 58(3), 66–87. https://doi.org/10.1525/cmr.2016.58.3.66

Burgess, R. (2019). Rethinking global health: Frameworks of power. Routledge.

Herbst-Damm, K. L., & Kulik, J. A. (2005). Volunteer support, marital status, and the survival times of terminally ill patients. Health Psychology, 24(2), 225–229. https://doi.org/10.1037/0278-6133.24.2.225

Johnson, P. (2003). Art: A new history. HarperCollins.

Lindley, L. C., & Slayter, E. M. (2018). Prior trauma exposure and serious illness at end of life: A national study of children in the U.S. foster care system from 2005 to 2015. Journal of Pain and Symptom Management, 56(3), 309–317. https://doi.org/10.1016/j.jpainsymman.2018.06.001

Osman, M. A. (2016, December 15). 5 do’s and don’ts for staying motivated. Mayo Clinic. https://www.mayoclinic.org/healthy-lifestyle/adult-health/in-depth/5-dos-and-donts-for-staying-motivated/art-20270835

Sue, D. W., & Sue, D. (2016). Counseling the culturally diverse: Theory and practice (7th ed.). Wiley.

Walden University Library. (n.d.). Anatomy of a research article [Video]. https://academicguides.waldenu.edu/library/instructionalmedia/tutorials#s-lg-box-7955524

Walden University Writing Center. (n.d.). Writing literature reviews in your graduate coursework [Webinar]. https://academicguides.waldenu.edu/writingcenter/webinars/graduate#s-lg-box-18447417

World Health Organization. (2018, March). Questions and answers on immunization and vaccine safety. https://www.who.int/features/qa/84/en/
