I’m working on a social science discussion question and need a sample draft to help me learn.

Discuss the concept of EBP in social work practice, based on the readings, in your own words.
Clin Soc Work J (2011) 39:328–333
DOI 10.1007/s10615-011-0358-x
ORIGINAL PAPER
Evidence-Based Practices Do Not Exist
Bruce A. Thyer • Monica Pignotti
Published online: 12 August 2011
© Springer Science+Business Media, LLC 2011
Abstract The original process model of evidence-based
practice (EBP) is described, and contrasted with the
empirically supported treatments (EST) initiative which
designated selected interventions as meeting some evidentiary benchmark (e.g., supported by two well-designed
randomized controlled trials). EBP does not utilize lists of
ESTs, and designating a given psychotherapy as empirically supported is actually antithetical to the EBP decision-making process. Much of the resistance to EBP within
social work may be attributable to confusion between EBP
as it was originally conceived as a mutual decision-making
process occurring between the clinician and the client, and
the promulgation of lists of EST and the subsequent urging
that social workers select their psychotherapies from such
lists. The latter is not scientifically justifiable, nor does it
take into account other variables crucial to EBP, such as
professional values, clinical expertise, client preferences
and values, and available resources. EBP as it was originally conceived has much to add to the practice of clinical
social work.
Keywords Evidence-based practice · Clinical social work · Empirically-supported treatments · EST · EBP
Introduction
As two social workers who support the process of evidence-based practice (EBP) as outlined by the originators of this
model, proposed in the first edition of Evidence-based
B. A. Thyer · M. Pignotti
College of Social Work, Florida State University,
296 Champions Way, Tallahassee, FL 32306, USA
e-mail: [email protected]
Medicine (Sackett et al. 1997) and continued through to the
4th edition of this primary sourcebook (Straus et al. 2010), the
title of this paper may seem surprising. It is actually very
accurate, and the extent to which it seems surprising to readers
reflects the extent to which EBP is often misunderstood within the
human service community, including clinical social workers.
It is always best to become familiar with any given model’s
tenets by reviewing its original source documents. Here is how
EBP is defined by its developers:
Evidence-based medicine (EBM) requires the integration of the best research evidence with our clinical
expertise, and our patient’s unique values and
circumstances.
By best research evidence we mean clinically relevant research, often from the basic sciences, but
especially from patient-centered clinical research into
the accuracy and precision of … tests … and the
efficacy and safety of therapeutic, rehabilitative and
preventive regimens…
By clinical expertise we mean the ability to use our
clinical skills and past experiences to rapidly identify
each patient’s unique health state and diagnosis, their
individual risks and benefits of potential interventions, and their personal values and expectations.
By patient values we mean the unique preferences,
concerns and expectations each patient brings to a
clinical encounter and which must be integrated into
clinical decisions if they are to serve the patient.
When these three elements are integrated, clinicians
and patients form a diagnostic and therapeutic alliance which optimizes clinical outcomes and quality
of life. (Straus et al. 2010, p. 1, emphases in original)
We note the evident congruence between the above
description of EBP with clinical social work values and
practices. Having the practitioner seek out the best scientific evidence about potentially helpful assessment methods
and approaches to intervention is certainly congruent with
various social work ethical codes, as in:
Social workers should base practice on recognized
knowledge, including empirically-based knowledge,
relevant to social work and social work ethics
(National Association of Social Workers 2008,
4.01[c])
The reader will also note the importance of a client’s
preferences, values, concerns and expectations, and unique
state in this foundational description of evidence-based
practice. In many ways, EBP is a very holistic approach to
practice, and differs from prior models by its explicit
incorporation of nonscientific considerations into the
decision-making process. As illustrated in Fig. 1, EBP is
the congruence of all of these elements into making a
clinical decision, with no one factor having priority over
another.
One controversial point is what the best
research evidence consists of. EBP embraces all forms of
evidence, and certainly recognizes that in many areas of
practice the scientific evidentiary foundations may be quite
weak. But this does not mean that the model has no
applicability. EBP relies on the best evidence, and yes,
there is a preference for certain forms of evidence over
others. But this preference is based upon commonly
accepted standards of science, standards which have proven their usefulness in drawing legitimate conclusions.
Table 1 lists, in approximate order, the types of evidence
which may be taken into account when seeking the best
research evidence relating to selecting an intervention.
Other types of evidence would be considered if the issue
dealt with selecting an appropriate diagnostic or assessment measure (e.g., factor analytic studies), or attempting
[Fig. 1 The essential components of the process model of evidence-based practice: decision-making informed jointly by the best available research evidence; the environment and organizational context; client/population characteristics, state, needs, values, and preferences; and resources, including practitioner expertise]
to determine the etiology of a particular psychosocial
problem (e.g., epidemiological studies). And within each
form of evidence, studies may vary in quality. Usually, for
example, a randomized controlled trial (RCT) is seen as
stronger if it contains an appropriate number of participants, relative to one which, although published, had so
few participants as to be statistically underpowered. A
published study which included credible fidelity checks to
ensure that the interventions were delivered competently,
and without contamination between groups supposedly
receiving differing treatments, would be seen as stronger
than a similar study lacking such fidelity checks. And a
study whose findings have been independently replicated
and held up would be seen as more convincing than a study
which has never been put to the test again, or had been put
to the test and whose findings were not replicated.
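The point about statistical power can be made concrete with a quick calculation. The sketch below is our own illustration, not part of the article; it uses the statsmodels library, and the effect size and sample sizes are assumed values chosen only for demonstration.

```python
# Illustrative power calculation: how many participants per group does
# a two-arm RCT need to detect a medium effect (Cohen's d = 0.5) at the
# conventional alpha = .05 with 80% power?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64

# A published trial with only 15 clients per group is badly underpowered
# for the same assumed effect size.
power_small = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
print(f"Power with 15 per group: {power_small:.2f}")  # roughly 0.26
```

A trial this small will usually fail to detect a real medium-sized effect, which is why, all else equal, an adequately powered RCT is weighted more heavily.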
In the EBP practice model, the findings of the higher
forms of evidence, if competently conducted, are usually
given greater weight or credibility than forms of evidence
lower on the hierarchy. This is because these higher forms
are more capable of reducing bias and uncertainty in
arriving at conclusions. It is not uncommon for the positive
findings obtained from an uncontrolled pretest–posttest
group outcome study, e.g., one assessing clients’ psychosocial functioning before and immediately after treatment,
which can be diagrammed as an O1–X–O2 design, to be
overturned when a comparison group of untreated clients is
added to a replication study on the same intervention. Such
a comparison group can partially control for confounds
such as spontaneous improvements, the passage of time,
historical events, or regression to the mean (which may
occur when clients seek help when their problems are at
their worst). It is also common for positive findings
obtained through quasi-experiments, studies using naturally occurring groups of clients, to be overturned when the
study is replicated using more experimental methods, such
as randomly assigning clients to differing treatment conditions. At the low end of the hierarchy, the opinions of
highly experienced practitioners would usually be given
greater credibility than those of the novice clinician; and
treatment recommendations derived from legitimate,
widely accepted social work theory (e.g., attachment theory) would be seen as more useful than some novel and bizarre
theory involving completely new concepts and metaphysical forces (e.g., Reich’s Orgone theory). Conclusions
derived from a series of replicated narrative case studies
would be accorded greater weight than a single such report,
and so forth. Practitioners of all theoretical and epistemological persuasions weigh and value evidence, using differing standards, but the consistent use of mainstream
scientific methodology affords the field some systematic
rationale for making these necessary and inevitable judgments about the credibility and weight of evidence.
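Regression to the mean, in particular, lends itself to a brief demonstration. The simulation below is our own illustrative sketch, not drawn from the article: clients enroll when their symptom scores are at their worst, and an uncontrolled O1–X–O2 comparison then shows an apparent improvement even though the "treatment" has no effect at all. An untreated comparison group would show the same drift, exposing the confound.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each person's stable symptom level, plus day-to-day fluctuation.
true_severity = rng.normal(50, 10, n)
pretest = true_severity + rng.normal(0, 8, n)

# Clients seek help when their problems are at their worst: only the
# top quartile of pretest scores enrolls in "treatment."
enrolled = pretest > np.percentile(pretest, 75)

# Posttest with NO treatment effect, just fresh day-to-day fluctuation.
posttest = true_severity + rng.normal(0, 8, n)

# The uncontrolled O1-X-O2 design shows a spurious "improvement" of
# several points, driven entirely by regression to the mean.
drop = pretest[enrolled].mean() - posttest[enrolled].mean()
print(f"Apparent improvement with no real effect: {drop:.1f} points")
```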
Table 1 Hierarchical forms of evidence that may bear on selecting
an intervention (evidence higher on the list is usually seen as more
credible than lower forms of evidence)
Generally seen as stronger forms of evidence
Systematic reviews published by reputable organizations
Randomized N = 1 controlled clinical trials
Large scale multi-site randomized controlled clinical trials
Individual randomized controlled clinical trial
Large scale multi-site quasi-experimental studies
Individual quasi-experimental study
Replicated pre-experimental outcome studies
Individual pre-experimental outcome study
Single case experimental designs
Correlational studies
Narrative case studies
Expert clinical opinion
Credible theory
Opinions of professional colleagues
Generally seen as weaker forms of evidence
Note: Preference is given to evidence published in high-quality peer-reviewed journals
One common misconception of EBP is that it requires
the existence of randomized controlled trials, meta-analyses, or systematic reviews, in order to be successfully
undertaken. This is far from the truth. The reality is that
EBP asks the practitioner to locate the best available evidence, and to evaluate its findings and potential applicability, in order to help inform practice decisions. For
example, if there are no systematic reviews (SRs), meta-analyses, or RCTs, then one would seek out any pertinent
quasi-experiments. If none could be found, then see if there
are any pre-experimental studies, and so on. In EBP there is
always evidence to help inform practice. And sometimes
evidence lower on the scale found in Table 1 is indeed the
best evidence, and should be critically evaluated for its
potential applicability.
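To illustrate the step-down search just described, here is a minimal sketch in Python. The hierarchy paraphrases Table 1, and find_best_evidence and the example study are hypothetical names of our own invention; no real EBP tool works this mechanically, since each candidate study still requires critical appraisal.

```python
# Hypothetical sketch of the "step down the hierarchy" search: walk the
# evidence types from strongest to weakest and return the first level
# at which any studies exist.
HIERARCHY = [
    "systematic review",
    "randomized N = 1 controlled clinical trial",
    "randomized controlled trial",
    "quasi-experimental study",
    "pre-experimental outcome study",
    "single case experimental design",
    "correlational study",
    "narrative case study",
    "expert clinical opinion",
    "credible theory",
]

def find_best_evidence(available: dict[str, list[str]]) -> tuple[str, list[str]]:
    """Return the highest-ranked evidence type for which studies exist.

    In EBP there is always some evidence: if nothing on the hierarchy
    turns up, fall back on the opinions of professional colleagues.
    """
    for evidence_type in HIERARCHY:
        if available.get(evidence_type):
            return evidence_type, available[evidence_type]
    return "opinions of professional colleagues", []

# Example: no SRs or RCTs exist, so a quasi-experiment is the best
# available evidence and should be critically appraised (the study
# name is invented for illustration).
found = find_best_evidence({"quasi-experimental study": ["hypothetical 2009 study"]})
print(found)
```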
An important and often overlooked part of the EBP
process of critical evaluation is that in addition to seeking
out the best confirmatory evidence, it is equally important to
focus on the best disconfirmatory evidence, including
evidence that a particular intervention may have been
shown to produce harmful effects in some clients (Gambrill
2011; Lilienfeld 2007). Many of the evidence hierarchies
are designed in such a way that they seek out confirmatory
evidence to the exclusion of an equally important part of
the critical appraisal process, disconfirmatory evidence
(Gambrill 2006). For example, a critical analysis of a
systematic review on interventions for foster children
found that one of the interventions classified as supported
and acceptable also had evidence of harm that was being
overlooked (Pignotti and Mercer 2007). The EBP critical
appraisal process goes beyond simply looking at a hierarchy of confirmatory evidence and critically examines all
evidence available. Given the ethical mandate to first do no
harm, and the importance of considering
client values, even lower-level evidence of harm needs to
be taken into serious consideration.
Another example of failure to examine disconfirmatory
evidence was found in a review of energy therapies
(Feinstein 2008) which did not take into account randomized controlled studies that provided disconfirmatory evidence (Pignotti and Thyer 2009). Here, even though these
practices have not been shown to do harm, confirmatory
studies conducted by those who had a vested interest in the
interventions need to be tempered with an examination of
disconfirmatory studies that showed that the effects are
likely due to placebo. This is all part of the process that may
easily get lost if we simply pick from a list of “supported”
treatments rather than fully engaging in the evidence-based
practice process.
Obviously there is little chance of finding a perfect one-to-one correspondence between your client’s characteristics
and unique circumstances, and published studies involving
clients similar to yours, and EBP does not require this.
Again, simply look for the best available evidence,
including any disconfirmatory evidence. Suppose you have
a client who is Asian and is seeking help in overcoming
compulsive hoarding, yet when you read the empirical
literature on the treatment of hoarding you find that the
available RCTs have only involved Caucasian and African
American clients. Does the fact that you find nothing
involving the treatment of Asians who hoard mean you
have no evidence to rely on? Obviously not, but in this
particular case the best available evidence on the treatment
of hoarding involving Caucasian and African American
clients would be an excellent place to start. And even if
there are obvious demographic similarities between your
client and those reported in the published literature, there
will remain uncounted differences—religion, socio-economic status, prior treatment histories, etc.—which preclude
any simplistic extrapolation from the published evidence to
your unique client. But this does not mean you start from
scratch. Ideally, as research on psychosocial treatments
evolves, the field can identify robust therapies which are
effective across a wide array of client groups and service providers (e.g., LCSWs, psychologists, counselors, etc.),
whose effects are not so evanescent that they collapse when
applied under slightly different conditions than those under which they
were originally shown to be helpful.
One can find various efforts in the social work literature
that have attempted to provide some benchmarks or standards which one can use in evaluating scientific studies
(e.g., Thyer 1989, 1991, 2002), but there are now two
recent contributions to this literature which deserve particular consideration. The first of these is the Journal
Article Reporting Standards (JARS) found in the sixth
edition of the Publication Manual of the American Psychological Association (APA 2010, pp. 247–253). There
are four separate JARS, each building upon the prior ones.
The first is information recommended for inclusion in all
research reports involving the collection of data; the second
building upon the first by adding reporting standards for
studies involving an experimental manipulation or intervention; the third adds additional reporting criteria for
studies using random or non-random assignment methods
to allocate participants to comparison or control groups;
and the fourth relates to recommended standards for
reporting the results of a meta-analysis. By placing the
imprimatur of the American Psychological Association
behind such standards, it is hoped that a greater degree of
consistency in reporting the details of research studies will
facilitate the flow of information and result in the publication of more comprehensively reported and transparent
research.
A related influential development has been the creation
of the Consolidated Standards of Reporting Trials (Consort
Standards, see http://www.consort-statement.org/home/).
The CONSORT Statement is a template to aid in the write
up and critical analysis of randomized controlled trials. It
consists of a 25-item checklist for a reader (or author) to
use, indicating the page number of a paper where information pertaining to critical reporting elements of an RCT
is included. It addresses the standard elements and content
of an outcome study, e.g., Title and Abstract, Introduction,
Methods, Results, and Discussion. A more focused extension of the CONSORT Statement pertinent to experiments
involving nonpharmacological (e.g., psychotherapy, clinical social work, counselling) interventions has also been
developed (Boutron et al. 2008). Both the JARS and the
CONSORT Guidelines are benchmarks which can be used
to evaluate the scientific adequacy of studies on the outcomes of social work practice, using group research
designs. They possess the virtues of transparency and are
an admirable effort to control for the effects of experimenter/therapist bias and extra-study confounds as much as
possible to approximate nature’s truth regarding the efficacy of psychosocial treatments. Their epistemological
foundations are clear and, to many, compelling, consisting
of an amalgam of positivism, determinism, operationism,
falsificationism, empiricism, parsimony, realism, and scientific skepticism (see Thyer 2010). While these principles
may be endlessly debated in the social work literature by
our philosophical nihilists and sophists, in fact they are
widely accepted in the scientific community and put to
good use by serious research methodologists in the service
of evaluating the outcomes of practice. The results are
clear—a slow accumulation of credible knowledge about
empirically supported treatments for a wide array of mental
health diagnoses, other psychosocial disorders, and problematic interpersonal dysfunctions.
EBP is a process aimed at helping clinicians and clients
make important practice decisions. It is not a listing of
treatments that have met some evidentiary standard. The
five steps of this EBP process are as follows:
1. Step 1—converting the need for information (about prevention, diagnosis, therapy, causation, etc.) into an answerable question.
2. Step 2—tracking down the best evidence with which to answer that question.
3. Step 3—critically appraising that evidence for its validity (closeness to the truth), impact (size of the effect), and applicability (usefulness in our clinical practice).
4. Step 4—integrating the critical appraisal with our clinical expertise and with our client’s unique…values and circumstances.
5. Step 5—evaluating our effectiveness and efficiency in executing steps 1–4, and seeking ways to improve them both for next time. (Steps 1–5 quoted from Sackett et al. 1997, pp. 3–4)
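Read as an algorithm, the five steps form a loop: formulate, search, appraise, integrate, and then evaluate one's own execution before the next case. The skeletal sketch below is purely our own illustration; every function is a stand-in for professional judgment, and none of the names come from the EBP literature.

```python
# Purely illustrative: the five-step EBP cycle as runnable pseudocode.
def form_answerable_question(need: str) -> str:              # Step 1
    return f"What is the best available evidence about {need}?"

def track_down_best_evidence(question: str) -> list[str]:    # Step 2
    return ["hypothetical quasi-experimental study"]          # stand-in search

def appraise(evidence: list[str]) -> dict:                   # Step 3
    # Validity, impact, and applicability are clinician judgments.
    return {"validity": "moderate", "impact": "small", "applicable": True}

def integrate(appraisal: dict, client: dict) -> str:         # Step 4
    # Client preferences and values can overrule the research evidence.
    if not client["accepts_intervention"]:
        return "offer an acceptable alternative"
    return "proceed with the intervention, monitoring outcomes"

def evaluate_performance(decision: str) -> None:             # Step 5
    print(f"Decision: {decision}. Review how steps 1-4 went, and improve.")

question = form_answerable_question("treatment of compulsive hoarding")
evidence = track_down_best_evidence(question)
decision = integrate(appraise(evidence), {"accepts_intervention": True})
evaluate_performance(decision)
```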
If this five-step process of EBP is unfamiliar to the reader,
s/he is encouraged to seek out and read the original primary
source documents describing this model and outlined by its
developers (e.g., Straus et al. 2010) and not rely on third or
fourth-hand interpretations which have successively deviated from the original description. This is a rampant
problem in the social work and other literatures which
critique EBP. For example, the first article on this topic that
was ever published in the British Journal of Social Work
(Webb 2001) failed to cite a single primary source
document about evidence-based practice. The closest it
came was to cite a webpage from a social work academic
program in England. As a result, this now widely-cited
article was rife with mischaracterizations and straw-person
portrayals, and did a serious disservice to subsequent
informed discussion on the topic.
Evidence-Based Practice is Not a Medical Model
Although EBP originated in the field of medicine, its
generic process has become widely adopted in many nonmedical fields. We contend that this approach is not a
medical model because nowhere does it suggest that human
problems have a biological etiology (although many medical illnesses do); nowhere is it contended that human
problems are best addressed through biologically-based
interventions, such as drugs or surgery; and nowhere does
it assert that medical doctors should be the primary providers of care. These three elements are the defining features of the medical model. The reality is that EBP is
atheoretical with respect to the causes of problems, the
proper basis of intervention, or who should deliver services. EBP has been adopted not only within many health
care disciplines, but also many decidedly non-medical
fields such as policy practice, community intervention,
supervision, management and administration, psychology,
and public administration. It is a decidedly scientific
model, but this is a well-justified approach to many areas of
social work practice. The conceptual origins of a model of
practice, in this case within the discipline of medicine,
have little bearing on the pragmatic application of an
approach in other areas. It is either helpful, or it is not.
Evidence-Based Practice is Not a Collection
of Empirically-Supported Treatments
One of the more common misconceptions of EBP, and a
major source of resistance to it, is the notion that it consists
of clinicians diagnosing a client using formal mental health
criteria, then tracking down psychosocial or other interventions that have met some established standards of
research evidence. The practitioner is then expected to
apply this selected empirically supported treatment (EST) to the client, and if this is not done
the clinician is somehow not adhering to so-called ‘best
practices’ and may be seen as ethically suspect or even
engaging in malpractice. For example, social workers
Mullen and Streiner (2004, p. 113) define EBPs as “any
practice that has been established as effective through
scientific research according to a clear set of explicit criteria.” This confusion was also evident in a national survey
of American social work faculty, wherein Rubin and Parrish (2007) found that about 23% of respondents endorsed
the definition of EBP as “… a way to designate certain
interventions as empirically supported under certain conditions” (p. 116), about 24% as “a process that includes
locating and appraising evidence as a part of practice
decisions” (p. 116), and 46% defined EBP as both of the
above definitions. In other words, 25% of American
social work faculty identified the definition of EBP correctly (the second choice above).
In part this confusion arises through the conflation of
EBP with a parallel initiative undertaken by elements of
the American Psychological Association (APA) known as
the Empirically Supported Treatments initiative. This, like
EBP, began in the early 1990s and consisted of two processes. The first was to come to some consensus regarding
how much evidence should be considered necessary in
order to designate a given form of psychotherapy as
‘empirically supported’. Once this benchmark was arrived
at, researchers scoured the literature to determine which
treatments met these standards, and lists of these so-called
empirically-supported treatments (ESTs) began to appear
in the literature. These evidentiary standards and lists of
ESTs (now called Research-supported Treatments) can be
located via a website maintained by the Division of Clinical Psychology of the APA (http://www.psychology.sunysb.edu/eklonsky-/division12/). One can click on a
given mental illness diagnosis and a description of the
disorder and its research supported treatments (and supportive citations) will appear, along with information on
how to get trained in that method. Or one can click on a
given form of psychotherapy and read about its level of
research support. Thyer (2010) reviews the history of social
work’s empirical clinical practice model, psychology’s
empirically supported treatments initiative, and medicine’s
evidence-based practice movement, and notes the similarities (a strong reliance on scientific evidence to guide the
selection of treatment) and differences (the greater
sophistication of the EBP process model). Nowhere does
EBP provide lists of treatments. Instead, individual clinicians are urged to consult and appraise the research evidence themselves, and integrate this information with other
crucial elements of this model, including client preferences, ethical considerations, one’s own clinical expertise,
and the availability of resources. These latter elements are
conspicuously absent from the lists of ESTs, and this
lacuna is one source of confusion and resistance to EBP,
since EBP is so widely seen as referring to ESTs. In fact,
nowhere does EBP provide lists of endorsed or approved
therapies, or statements to the effect of, for example,
“Beck’s Cognitive Therapy is an Evidence-based Practice
for Clients with Major Depression.” Any statement like this
would virtually ignore two-thirds of the EBP process model.
If a client simply refused to engage in cognitive therapy, a
practitioner could still remain true to EBP by offering
alternative treatments. If a given treatment were seen as
unethical, even if strongly supported by research evidence,
EBP permits, indeed encourages, allowing ethical considerations to overrule scientific considerations. Suppose a
client with major depression also had an intellectual disability, and was unable to productively engage in BeckÂ’s
Cognitive Therapy? Would it be appropriate to say that
Cognitive Therapy was an evidence-based practice for this
client? Obviously not.
The most explicit guidance EBP provides in this regard
is the commissioning and publication of what are called
systematic reviews (SRs) by the Cochrane and Campbell
Collaborations, with the former focusing on health care
(including mental health and substance abuse) and the
latter on social welfare, education, and criminal justice.
Table 2 Common acronyms related to evidence-based practice
CONSORT—Consolidated Standards of Reporting Trials, a standardized template of information which should always be included in reporting the design and results of an RCT
EBP—Evidence-based practice
EST—Empirically supported treatment (replaced by RST)
EVT—Empirically validated treatment (replaced by EST)
JARS—Journal Article Reporting Standards, now a part of the publication guidelines of the American Psychological Association
RCT—Randomized controlled trial
RST—Research supported treatment, the latest modification of what began as empirically validated treatments; these designations emerged independently from the original EBP process model, and are not used in EBP
SR—Systematic review
These SRs (see Littell et al. 2008) are prepared by collaborative teams of clinical and methodological experts
summarizing the best available evidence about various
interventions related to particular problems (as well as SRs
on assessment methods and the etiology of problems). The
end statements are NOT recommendations about what
clinicians should do or not do, but rather careful summaries
of what the research evidence has to say about the intervention’s effectiveness. It is up to the individual practitioner to assimilate this information, along with other
sources of evidence, and in conjunction with the client,
decide on what to do. And sometimes doing nothing
(watchful waiting) can be the best option. It would be
antithetical, according to EBP, to say that a given treatment
should be used with clients with a particular problem, and
in real EBP, this is not done. Hence the title of this article,
“Evidence-based Practices Do Not Exist”. There is the
process model of evidence-based practice, which is more of
a verb. There are no scientifically justifiable lists of evidence-based practices (as a noun). One cannot decide how
to treat a client only by considering the scientific evidence.
The other factors comprising the EBP model are also
essential to consider. This is why it is incorrect to assert
that any given treatment is an evidence-based practice.
We have included a brief table (see Table 2) of the
acronyms used in this paper, common to discussions of
evidence-based practice, so that the reader may more
readily understand their differences and commonalities.
References
American Psychological Association. (2010). Publication manual of
the American Psychological Association (6th ed.). Washington,
DC: Author.
Boutron, I., Moher, D., Altman, D. G., Schulz, K. F., & Ravaud, P.
(2008). Methods and processes of the CONSORT group:
Example of an extension for trials assessing nonpharmacologic
treatments. Annals of Internal Medicine, 148, W60–W66.
Feinstein, D. (2008). Energy psychology: A review of the preliminary
evidence. Psychotherapy: Theory, Research, Practice, Training, 45,
199–213.
Gambrill, E. (2006). Evidence-based practice and policy: Choices
ahead. Research on Social Work Practice, 16, 338–357.
Gambrill, E. (2011). Evidence-based practice and the ethics of
discretion. Journal of Social Work, 11, 26–48.
Lilienfeld, S. O. (2007). Psychological treatments that cause harm.
Perspectives on Psychological Science, 2, 53–70.
Littell, J. H., Corcoran, J., & Pillai, V. (2008). Systematic reviews and
meta-analysis. New York: Oxford University Press.
Mullen, E., & Streiner, D. L. (2004). The evidence for and against
evidence-based practice. Brief Treatment and Crisis Intervention, 4, 111–121.
National Association of Social Workers. (2008). Code of ethics. Downloaded from http://www.socialworkers.org/pubs/code/code.asp on
June 1, 2011.
Pignotti, M., & Mercer, J. (2007). Holding therapy and dyadic
developmental psychotherapy are not supported and acceptable
practices: A systematic research synthesis revisited. Research on
Social Work Practice, 17, 513–519.
Pignotti, M., & Thyer, B. A. (2009). Some comments on “Energy
psychology: A review of the evidence”: Premature conclusions
based on incomplete evidence? Psychotherapy: Theory, Research,
Practice, Training, 46, 257–261.
Rubin, A., & Parrish, D. (2007). Views of evidence-based practice
among faculty in master of social work programs: A national
survey. Research on Social Work Practice, 17, 110–122.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., &
Haynes, R. B. (1997). Evidence-based medicine: How to
practice and teach EBM. New York: Churchill Livingstone.
Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B.
(2010). Evidence-based medicine: How to practice and teach
EBM (4th ed.). New York: Churchill Livingstone.
Thyer, B. A. (1989). First principles of practice research. British
Journal of Social Work, 19, 309–323.
Thyer, B. A. (1991). Guidelines for evaluating outcome studies in
social work practice. Research on Social Work Practice, 1,
76–91.
Thyer, B. A. (2002). How to write up a social work outcome study for
publication? Journal of Social Work Research and Evaluation,
3(2), 215–224.
Thyer, B. A. (2010). Introductory principles of social work research.
In B. A. Thyer (Ed.), Handbook of social work research methods
(pp. 1–24). Thousand Oaks, CA: Sage.
Webb, S. (2001). Some considerations on the validity of evidence-based practice. British Journal of Social Work, 31, 57–79.
Author Biographies
Bruce A. Thyer Ph.D. is professor and former Dean with the College
of Social Work at Florida State University. Dr. Thyer earned his
Ph.D. in social work and psychology from the University of Michigan
in 1982. He is an LCSW and edits the journal Research on Social
Work Practice.
Monica Pignotti received her MSW from Fordham University in
1996 and her Ph.D. in social work from Florida State University in
2009. She is an LMSW and is the co-author, with Bruce Thyer, of
Science and Pseudoscience in Social Work (Oxford University Press,
2012).
