HighBeam Research

Title: A FALLIBILISTIC RESPONSE TO THYER'S THEORY OF THEORY-FREE
EMPIRICAL RESEARCH IN SOCIAL WORK PRACTICE.
Date: 1/1/2001; Publication: Journal of Social Work Education; Author: GOMORY, TOMI
Empirical clinical social work practice (one of the current scientific paradigms of the profession)
has too narrow an understanding of how science is done. That perspective maintains that science
is an inductive, positivist, often atheoretical process, which can lead to credible, justified
knowledge. By reviewing Bruce Thyer's article, "The Role of Theory in Research on Social
Work Practice," through a Popperian falsificationist lens the difficulties of that approach are
highlighted. An alternate approach, "Critical Rationalism," a fallibilistic noninductive trial-and-error testing of conjectured theories and methods, is described. It is contended that this approach better advances professional knowledge and resolves the problems unresolved by the current methodology.
"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean--neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master--that's all."
Lewis Carroll
If you want to find out anything from theoretical physicists [or social work authorities] about the methods they use, I advise you to stick closely to one principle: don't listen to their words, fix your attention on their deeds.
Albert Einstein
THE ESSENTIAL ARGUMENT that Professor Thyer makes in his article, as well as in a
previous article (Thyer, 1994a), is that theory
is neither essential nor necessarily desirable for research on social work practice ... [and further there] are many negative consequences for our field's current insistence that dissertations be exercises in theory building. Rather than mandating that ... a social work dissertation must be either theoretically based or contribute to theory, let us recognize nontheoretical research contributions and not accord them secondary status.
(Thyer, 2001, p. 22)
He argues this position not as an "anti-theoretician" which he assures us, "I am not" (Thyer,
1994a, p. 148). As evidence, he tells us that he teaches "didactic courses ... devoted to human
behavior theory and its relevance to practice ... on an irregular basis" (p. 148). This may be our
first clue as to what is currently wrong with social work education. Perhaps what schools of
social work ought to be providing from the beginning to their students are critical analytic skills
and courses about various theories and interventions that review the good, the bad, and the ugly
"facts" about them rather than the "didactic," descriptive lecture courses usually offered. The
various "Foundation," "Introductory," or "Human Behavior in the Social Environment" (HBSE)
courses, because they are uncritical surveys, may give the impression that all the theories,
methods, and interventions discussed (merely because they are discussed) are effective and are
meant to be professionally sampled much as a chef might pick a favorite recipe from among
many good ones available in a cookbook. Be that as it may, Professor Thyer is therefore in his
own words, "perversely" arguing that, "when it comes to training competent BSW and MSW
[and according to his present article, PhD] practitioners, devoting time to teaching of theory is
largely a waste of time" (Thyer, 1994a, p. 148). He offers a host of objections as to why in the
current and previous articles (a) social work educators can't or don't do a good job of teaching
theory; (b) most theories we teach in social work, both etiological and interventionary, are
wrong; (c) invalid theories may lead to ineffective methods; and (d) there are rival hypotheses to
the theories claimed to be the "explanations" for treatment effectiveness, and dealing with them
"needlessly" complicates outcome studies.
My response to his provocative claims is equally direct albeit conjectural. He is correct when he
asserts that we often don't teach theories effectively, or that we merely promote a "nodding
acquaintance with a few of the more prominent orientations" (Thyer, 1994a, p. 149), or that we
use theories which are of little value for the actual problems at hand because they probably are
false. But Thyer is making only the trivial point that social work apparently has made very little,
if any, progress toward discovering and deploying rigorous knowledge relevant to the field. I say
trivial because this has been the difficulty that a host of social work authors, including himself,
have been discussing, hypothesizing, and writing about for the past 35 odd years and apparently
getting nowhere (Briar, 1967; Fisher, 1973a, 1973b, 1976; Gambrill, 1999; Reid, 1998; Rosen,
Proctor, & Staudt, 1999). As he aptly notes about the long-recognized crisis in social work
research, "There is a phrase to describe a crisis which has persisted for longer than one's
professional life. It is called business as usual!" (Thyer, 2001, p. 23). However, when he argues
against theory as an essential component of the intellectual work required for all aspects of
scientific research in our field he is simply wrong. Besides, his definition of social work as an
"applied" profession rather than an academic discipline (p. 14), a view for which he offers no
critical argument, is prey to the well-recognized fallacy of invalid disjunction (either/or-ing).
His suggestion that a misguided pre-occupation with "theory" (its building and testing) is the
cause of the field's inability to use science to advance professional knowledge is based on a
mistaken philosophic/scientific world view: Justificationism, a heterogeneous philosophic perspective (Popper, 1959/1968, 1979, 1983) that, in Professor Thyer's case, I conjecture (subject to his refutation), consists of a commitment to empiricism, positivism, and a hint, perhaps unbeknownst to him, of relativism. This justificationary commitment and his particular
definitional terminology, by the nature of its assumptions, necessarily limits what he sees as
science and its alleged methods, producing the very difficulties he is attempting to overcome.
He parses definitions to suit the current fashions (at least what is fashionable among those
currently practicing empirical social work). See his Table 2 for some selected definitions of
"theory."(*) He argues that his definition of the concept "theory" is the right one when he claims
that theory differs from such things as philosophies of science, models, perspectives, paradigms,
conceptual frameworks, or lenses (Thyer, 2001, p. 17), to which I would add my term "world
views." I use the term to describe what I conjecture are Professor Thyer's theories of science and
philosophy. He states that "it is appropriate to clarify what I mean by the term `theory,' since it is
often misunderstood" (emphasis added, p. 16). Professor Thyer apparently doesn't consider the
possibility that it might be his definition that may be incorrect because it is he who
misunderstands. He offers Table 3 (p. 17) to illustrate that "these are distinct constructs" (p. 17).
This table is no illustration of his version of the definition of "theory" if he means effective
evidence. The table is a compilation of quotes selected by him because they echo and support his
claim and are just declarations by authorities, not telling arguments. One could just as easily
compile a contradictory list of quotes from other "experts." If I were in a justificationary mood
looking to round up a herd of supportive quotes, topping my list would be this one, "we must
regard all laws or theories as hypothetical or conjectural; that is as guesses" (Popper, 1979, p. 9,
emphasis in original). "World view" is a verbal construct which may, without any deleterious
empirical consequences, be also labeled a theory about science and its method, a model of
critical thinking, a conceptual perspective, a theoretical framework, an intellectual paradigm, or a
philosophy of science, contrary to Professor Thyer's assertions in his present article that a theory
and these other notions are not comparable (Thyer, 2001, p. 17). The reasons are straightforward:
First, there is no immanence in words. Words are not a priori attached to the data of the world;
they are deployed for signifying by human volition arbitrarily. Renaming phenomena does not
alter their empirical content (see Peckham, 1979). Second, all of the named constructs are
identical in their empirical explanatory status; they are all tentative, hypothetical constructs or
guesses subject to falsification.
In the sections which follow I will briefly describe Thyer's problem situation and the theoretical
framework that drives his work, offer some review and discussion of said work, and, based on
this critical analysis, suggest why he has been unable to advance and most likely may not
advance very far in his hoped-for efforts at solutions to the social work knowledge development
problem along the well-trodden road of "empirical social work research and practice" as
presently conceptualized. I will conclude with some suggestions for an alternate approach with
no well-justified support, but some well-tested arguments for knowledge development and its
use.
Author's Theoretical/Methodological Perspective
Before entering on a close review of Thyer's assumptions and claims, it might be helpful by way of contrast to lay out my own position. I am a Fallibilist or Critical Rationalist. Karl Popper, an
eminent philosopher of science whose earliest professional interests were in social work and
Adlerian psychology (Popper, 1974), has most comprehensively explicated this approach in the
20th century (Popper, 1959/1968, 1965/1989, 1979, 1983; see also Miller, 1994, for critiques and
responses to Critical Rationalism). Some distinguished fallibilists are Albert Einstein,
methodologist and evolutionary epistemologist D. T. Campbell (of Campbell & Stanley, 1963/1966, and Cook & Campbell, 1979, fame), Nobel Laureates F. A. Hayek (economics), Sir
John Eccles (biology), and Peter Medawar (1988) (medicine and physiology), who has said the
following:
Popper is held in the highest esteem by scientists, a number of whom conspired a few years ago to bring it about that he was elected into the Fellowship of the world's oldest and most famous scientific society, the Royal Society of London. I am very sorry to have to report that a good many philosophers are jealous of Popper.... I have a feeling that many lecturers on scientific method are oppressed by the sheer reasonableness of Popper's philosophy. (pp. 114-115)
A fallibilist sees the answer to the problem of learning from experience (the age old problem of
how knowledge can be gained) and of distinguishing better from worse knowledge (how to
choose among knowledge claims) as the application of the Socratic method of intense and
rigorous critical debate/tests by trial-and-error efforts at falsification of bold conjectures (risky
and controversial ideas or arguments) offered as solutions to real world problems. Science is
simply a subset of such a critical thinking process, using experiment and observations to "offer ...
new arguments or new criticisms" (Agassi, 1975, p. 26). We begin with a problem of interest for
which we conjecture (guess, hypothesize, theorize) a possible solution (intervention, approach,
explanation, program, procedure) which may explain, reduce, or resolve the problem. The
hypothetical solution must be (in science, empirically) testable or, more broadly, criticizable. We
should before the fact provide the conditions that, were they to occur, would falsify our
conjecture to prevent ad hoc excuses after the fact (Popper, 1968, pp. 40-42).
The falsifying test or criticism should be the most difficult (in science, the most
methodologically rigorous) one can manage. If we falsify our hypothetical solution, we eliminate
that from our (scientific) repertoire and conjecture anew, having along the way identified and
eliminated one of the myriad false notions of humankind and thereby advanced somewhat
paradoxically our knowledge of the world, not through accumulation but elimination. If our
conjectured problem solution passes a rigorous falsification effort, we are free to utilize it again,
not because we have shown that it has some measure of support (it has none), but because it
passed a difficult test which some rival solutions may not have. This, however, in no way
suggests that we have good reasons for assuming that it would, if we tested it again. We must
remain highly skeptical each and every time we apply our theories or interventions because they
are always hypothetical and subject to falsification in the future. This is so, because the method
of induction ("direct" observations summed together leading to reliable universal or general
theories uncritically applicable to the yet unmeasured or unhappened) is a nonfact, contrary to
what most social work methodologists and other social work authorities, including Professor
Thyer, believe.
Field research--the direct observation of events in progress ... is
frequently used to develop theories through observation. (Rubin & Babbie,
1989, p. 45)
In due course, explanatory theories may well emerge from data aggregated
about the effectiveness of interventions. Research on social work practice
should be inductively derived from client and societal problems not
deductively driven from explanatory theories (Thyer, 2001, p. 21).
Historically the influence of science on direct social work practice has
taken two forms. One is the use of the scientific method.... for example
gathering evidence and forming hypotheses about a client's problem. (Reid,
1998, p. 3)
The preceding quotes illustrate the limited and uncritical engagement with the issues of the
philosophy and methodology of science by the profession, including those who are engaged in
setting the knowledge development agenda for the profession. These quotes appear to make
common sense. No one would dispute that some "facts" ought be known to help solve client or
other social work-related problems and should frame and drive hypothesized efforts at solutions
and their testing rather than imposing some major "explanatory theories" arbitrarily. The claim,
however, that "objective" facts or naked theory-free data can be summed together for developing
theory is methodologically naive. The underlying assumptions of induction are incorrect. They, like Minerva, the goddess of wisdom, are a myth, one without which science has progressed and can continue to progress very nicely.
Induction is Sir Francis Bacon's notion of reading the book of nature as an obvious transparency,
which he proposed as the scientific method. All one has to do is look and observe the world, and
its data will stream into human consciousness unbiased, as they are in themselves. This is
sometimes known as the commonsense theory of knowledge (Popper, 1983, pp. 60-78). If you
want to know anything about the world, simply open your eyes and ears and absorb the "true
facts" about the world because the senses are accurate conduits of such knowledge. "True"
observations provide "true facts." If you put together enough of these "true facts," you can
provide, through the sheer weight of this "true" evidence (its authenticity determined by the
consensus opinion of the certified authorities), credible, reasonable, substantial, or, in the best
case, conclusive evidence to justify the derived theory. Here I can only argue somewhat
programmatically for the falsehood of induction by discussing two aspects of its difficulties (see
Popper, 1959/1968, 1979, for a comprehensive analysis). First, the facts, contrary to the claims
of social work's Baconians portrayed above, are that there are no theory-free observations or
facts. Theory biologically and logically precedes, envelops, and permeates observation. Due to
our evolutionarily developed biological cognitive equipment, we filter and restrict the flow of the
information from the "out there" to that which is important for our survival. As the philosopher
Peter Munz (1993) puts it, we are "embodied theories" about the world, which appears solid to us, even though, according to the currently best-tested evidence, most of this world is empty space held together sub-atomically by powerful forces.
We only "sense" a limited part of the world. We see limited portions of the color spectrum and
hear only certain sound frequencies while being blind and deaf to others, which other biological
organisms see and hear based on their own evolutionarily developed cognitive mechanisms and
survival needs. Bats have "biological sonar" to navigate at night and dogs hear supersonic sound;
we do not. All our observations are theory impregnated biologically. Observation furthermore
requires information about what to observe (we must have a theory of why the particular
observables are relevant to our problem) otherwise we could not select among the countless
possible objects available for the focus of our attention. A theory of some sort about why some
elements of our environment are to be considered data for observation or are to be named as
variables for our analysis, while others are not (often at our or more likely at our clients' peril),
always must precede observation.
For example, Professor Thyer co-edited a two volume Handbook of Empirical Social Work
Practice (Thyer & Wodarski, 1998) of which the first volume is devoted to "mental disorders"
and their empirically validated treatments. These "mental" illnesses are not identifiable by any
physiological markers as are cancers, neurosyphilis, or heart disease; "[t]here are at present no
known biological diagnostic markers for any mental illnesses" (Andreasen, 1997, p. 1586).
Mental disorder categories are simply the consensus opinions about problematic behavior, the
work of committees of psychiatric experts like Nancy Andreasen, just quoted, who make up the
Task Force on DSM-IV (American Psychiatric Association, 1994). Such opinions are theories
(attempted explanations) about what compose, and why, hypothesized groupings of behaviors that are alleged to cohere into medical illnesses. Regardless of what we might think about such
claims, and we know that some of our own outstanding social work scholars have been highly
critical of these notions and their results (Cohen, 1989, 1994, 1997; Kirk & Kutchins, 1992;
Kutchins & Kirk, 1997), the facts are, that without theories about what constitute these disorders
(the alleged clustering of behaviors/experiential claims), no symptoms (behaviors) representing
such "disorders" could be observed nor could any group of individuals be recognized as
"schizophrenic" or "manic-depressive." Even more problematically, according to Dr. Andreasen,
America's foremost schizophrenia researcher and the chair of the DSM-IV committee on
schizophrenia, at the end of the 20th century after well over 100 years of schizophrenia research,
we don't even know what that word represents. "[A]t present the most important problem in
schizophrenia research ... [o]ur most pressing problem is ... defining what schizophrenia is"
(Andreasen, 1999b, p. 781). She further notes in a lecture given after receiving "the prestigious
Adolph Meyer Award" at the American Psychiatric Association's 1999 Institute of Psychiatric
Services, "The DSM definition may have distracted us from the real illness by overemphasizing
symptoms and even the wrong ones" (Andreasen 1999a).
The mental health field, as I write this sentence, is still just guessing about what
behaviors/symptoms to put under the various labels like schizophrenia. The DSM, now in the new millennium in a bigger and better version known as DSM-IV-TR (American Psychiatric
Association, 2000), is of little help when it comes to empirical questions, because at least some
of the foundational "symptoms" are turning out, as Andreasen notes, to be no longer essential or
are probably the wrong ones. These difficulties, however, don't stop some other experts from
purporting to provide the empirically validated treatments for "mental disorders" (identified
perhaps by "theory free observations"), even though many of these categories have no reliability
or validity (Kirk & Kutchins, 1992) or apparent physical existence.
Professor Thyer admits, "the acute limitations of this approach [the DSM nosology] ... [t]he so-called mental disorders ... really should be labeled behavioral, affective, and intellectual disorders to avoid an unwarranted etiological inference" (Thyer & Wodarski, 1998, p. x,
emphasis in original). This claim leaves the interesting question of what these new theoretical entities are and what empirical research led to their christening. Has this set of notions gone
beyond the old fashioned, everyday terms of troubling, disturbing, unpleasant, or unwanted
behavior? If Professors Thyer and Wodarski really wanted to avoid "unwarranted etiological
inference," why use the word "disorder" at all? It's the one term that is etiologically suggestive.
As DSM-IV-TR tells us:
The terms mental disorder and general medical condition are used throughout this manual.... It should be recognized that these are merely terms of convenience and should not be taken to imply that there is any fundamental distinction between mental disorders and general medical conditions, that mental disorders are unrelated to physical or biological factors or processes. (American Psychiatric Association, 2000, p. xxxv, emphasis in original)
We always hypothesize before we observe. It might be the case that if you are not fallibilistically
inclined, you may not be self-critically examining your own cognitive approach and may miss
identifying this step. Since we as organisms theorize (interpret) as a biological necessity in
simply confronting the environment, it is not obvious on a commonsensical level that any
filtering is going on, and this will only become available to our conscious analytic process if it's
identified as the problem situation (perhaps by critically minded social work educators).
However, once alerted to our error, we are from then on responsible for critical self-reflection.
The second difficulty with induction is the logical and factual impossibility of reasoning from the
known to the unknown, the particular to the universal, the finite to the infinite. No matter how
many times you observe apparently similar data or facts, these observations are no guarantee that
such "facts" will continue to operate in the future. Ten thousand uncontroverted, well-documented, reliable, and valid observations of white swans in the United States may, if you believe in the weight of evidence, offer strong support for the theory "all swans are white." The only problem is that in Australia there are black swans. This one observation of such a counterexample falsifies the apparently well-supported "all swans are white" theory--at least
logically. You would want to do some reliability and validity checks to eliminate potential
technical difficulties that can occur in the real world (i.e., make sure you are actually observing
black swans not ravens). This hypothetical analysis applies to all generalizations from
particulars. There is no way to know if the next observation, research test, or a yet unthought
argument will not be the counterexample to falsify very "credible," even apparently "absolutely
true," theories we hold dear. There has never been a "better-validated," "true" theory than
Newtonian mechanics, which for some 230 years was consistently validated, and all the best
minds conceded that the world was thus explained. This was so until Einstein provided the
falsification for this claim by positing a better and different explanation through his special and
general theories of relativity (e.g., for Newton gravity is a function of mass, for Einstein gravity
is a function of the curvature of space), which explained everything that Newton's mechanics did
and much that it did not. Einstein thereby demonstrated that all theories, no matter how
authoritative, even those burdened with the regal title of Natural Laws, are just hypothetical and
tentative human guesses about what is. They are all subject to refutation when strongly critiqued
and are deemed falsified if they fail exacting tests (Einstein's laws predict celestial motion more
accurately than Newton's, for example).
The consequences for science and method of these realities are the following. One cannot
generalize at all beyond the observed data. As Campbell and Stanley (1966) note in their classic, Experimental and Quasi-Experimental Designs for Research:
A caveat is in order. This caveat introduces some painful problems in the science of induction. The problems are painful because of a recurrent reluctance to accept Hume's truism that induction or generalization is never fully justified logically.... Generalization always turns out to involve extrapolation into a realm not represented in one's sample.... Thus, if one has an internally valid Design 4, one has demonstrated the effect only for those specific conditions which the experimental and control group have in common.... Logically we cannot generalize beyond these limits; i.e., we cannot generalize at all. (p. 17)
Donald Campbell, the senior author of the preceding quote, is the developer and popularizer of
social research methods (Campbell & Stanley, 1966; Cook & Campbell, 1979) that are
consistently cited by social work methodologists (i.e., Rubin & Babbie, 1989, p. 264; Bloom,
Fisher, & Orme, 1995, pp. 16-18; Schutt, 1996, pp. 233-243) as authoritative. These authors
appear not to be aware of Campbell's acknowledgement of Popper's devastating refutation of
induction and consequently don't appreciate its impact on research purported to rely on it (i.e.,
the invalidation of grounded theory, a method "for converting data into effective theory,"
Strauss, 1987, p. 7, and all other approaches which claim direct, unfiltered observation as the
source of theory).(1) Campbell was a close associate of Popper, sharing with him the
development of evolutionary epistemology, one of the most fruitful contemporary intellectual
perspectives (Radnitzky & Bartley, 1987), and has said this about Popper's influence:
It is primarily through the works of Karl Popper that a natural selection epistemology is available today.... Popper's first contribution to evolutionary epistemology is to recognize the process for the succession of theories in science as a similar selection elimination [trial and error] process.... In the process, Popper has effectively rejected the model of passive induction.... Most noteworthy, Popper is unusual among modern epistemologists in taking Hume's criticism of induction seriously, as more than an embarrassment, tautology, or a definitional technicality. It is the logic of variation and selective elimination which has made him able to accept Hume's contribution to analysis and to go on to describe the sense in which ... scientific knowledge is possible. (Campbell, 1987, pp. 47-51)
As Popper (1959/1968) puts it in two informative quotes:
According to my proposal, what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems, but, on the contrary, to select the one which is by comparison the fittest, by exposing them all to the fiercest struggle for survival....

How and why do we accept one theory in preference to others? The preference is certainly not due to anything like an experiential justification of the statements composing the theory; it is not due to a logical reduction of the theory to experience. We choose the theory which best holds its own in competition with other theories; the one which, by natural selection, proves itself the fittest to survive. This will be the one which not only has hitherto stood up to the severest tests, but the one which is also testable in the most rigorous way. A theory [or a model or an intervention] is a tool which we test by applying it, and which we judge as to its fitness by the results of its application. (pp. 42, 108)
Fallibilists hold the truth as the regulative idea (Popper, 1979). The fact that some theories, or
interventions, can be falsified by those which are better implies that one among them might be
the best or true, although we have no way of knowing which, even if we reach it, because
induction is false (the current best may be falsified in the future). Science has no "method" as
such. It operates by bold guesses or conjectures which are then put to severe tests (which
methodologically must be able to actually test what is being asserted) in a trial-and-error fashion.
The tests serve as negative feedback for correction. A positive outcome of a test doesn't provide
additional support; it simply gives us the go-ahead to continue using it subject to further tests,
and of course we learn that yet unfalsified methods or explanations are better than those falsified.
We should choose methods which have been tested and not yet falsified rather than those which
failed. But science, because it's a human not a divine enterprise, cannot tell us what is the best
among equally well-tested unfalsified notions (theoretical or applied) including our technology,
or even between these and those not yet tested. We must continue to test and hope to gradually
eliminate the less rigorous notions by learning from our mistakes and get at better and more
effective methods, testing even those methods which have a long history of passing such tests
because if we don't, we may miss our chance to improve our knowledge or eliminate false
knowledge that may result in harm to clients due to our inductive self-satisfaction. Our choice
among "equally" well-tested theories can then be left to "clinical judgement," which consists of
such things as professional experience (good or bad), serendipity, personal whim, intuition, and
client choice, and is affected by economic and temporal constraints.
Professor Thyer's Problem Situation
Professor Thyer's tasks, broadly speaking, appear to be to provide scientific knowledge to the profession, which he feels is necessary "to really be of help to clients" (Thyer & Wodarski, 1998, p. 12), and as a result to improve the professional credibility of the field. His more
particular problem is to explain what that scientific knowledge consists of. These are
commendable aspirations with which I whole-heartedly concur, but they are no more than one
would expect from someone with the title of "research professor" working at a state university
under the current reigning intellectual paradigm of Science. One is hard pressed today to find any
issue, problem, or phenomenon "explained" without using the cover of science or scientific
research (often of the mental health variety) from adolescent (mostly young boys) acting-out
behavior, explained as Attention Deficit Hyperactivity Disorder, a brain disease (science as
disciplinarian, the recommended scientific treatment--medication), to presidents of the United
States being sexually indiscreet and breaking the sanctity of marriage, apparently due to sexual
addiction (science as moral excuse, scientific treatment--moral therapy by favored clergymen;
see Szasz, 1990), to large numbers of adults trying and regularly using certain psychoactive
chemicals more often than some judge appropriate, apparently due to chemical dependence
(science as pleasure modulator, scientific treatment--12-step programs and drugs; see Schaler,
2000; Szasz, 1992).(2)
Much of this I predict will turn out over time to be pseudoscience (untestable or false beliefs claiming nevertheless to be scientific; see Bunge, 1984; Munz, 1985) if and when serious
critical tests are devoted to these issues. Professor Thyer's obvious good intentions to be more
than just intellectual "concrete" poured for the construction of the proverbial road to Hades
requires a conceptual framework capable of finding the critical scientific evidence. Let's review
that next.
Professor Thyer's Theoretical Framework
He tells us that his 1998 handbook (Thyer & Wodarski, 1998), co-written with Professor
Wodarski, is the best source of his philosophic perspective: "philosophical issues are dealt with
in chapter-length form by Thyer & Wodarski, 1998" (Thyer & Myers, 1999, p. 501). Professor
Thyer identifies empiricism and positivism as the two most important elements of his views on
science, and somewhat secondary to these two philosophic perspectives he adds realism and
naturalism (Thyer & Wodarski, 1998, p. 2). The strict inductivist approach to which he is
committed can be seen in the definitions he offers for "empiricism" and "empirical" (p. 2).
Empiricism is the process of using evidence rooted in objective reality and
gathered systematically as a basis for generating human knowledge. (Arkava
& Lane, 1983, p. 11)

Empirical--knowledge derived from observation, experience, or experiment.
(Grinnell, 1993, p. 442)
It should be clear from my previous argument against induction that this kind of empiricism is
simply not possible. All observation is biologically filtered (interpreted), and evidence must be in
a theoretical context to be testable. The primary role of data or observation is in the feedback
process or in the testing stage. Knowledge is gained when we falsify our expectations (our
conjectured solutions or explanations); corroboration of them gives us no new knowledge.
He further states that
Those who label themselves as empiricists, realists, or positivists delimit
the scope of their inquiry to the material, the objective, to that which
has an independent existence.... Conversely, empirical research has little
to say about those aspects of the world that are wholly subjective,
immaterial, or supernatural. (Thyer & Wodarski, 1998, p. 4)
This seems to leave out such common topics of empirical research as subjective well-being,
self-satisfaction, opinion polls on a multitude of topics, and cross-cultural studies on belief
systems (including belief in magic), just to list a few that are not "empirically" researchable
according to Thyer.
This approach also suggests that science must be done in a positivist or empiricist vein, but the
history of science contradicts such a view. Some of our greatest scientists (e.g., Galileo,
Schrödinger, Bohr, and Einstein) have used "imaginary" or "thought" experiments (Popper,
1968, pp. 442-456), requiring nothing other than their intellect for self-reflective thinking to
conjecture and test their ideas. Einstein never performed any physical experiments at all; he just
made bold predictions based on his musings and let others empirically test his theories against
observables. This kind of scientific work is in the rationalist tradition, falsifying any claims that
science is limited to empirical or positivist work. Positivism, especially the Logical Positivist
sort, was quite dogmatic and authoritarian, which stands contrary to Thyer's assertion that
"Logical positivists are fully aware that many significant areas of our professional and personal
lives should not be scrutinized through the lenses of science" (Thyer, 1994b, p. 6). It proscribed
anything that could not be scientifically (empirically) "verified" as literally meaningless
"metaphysical" nonsense (see Gomory, 1997b, and Popper, 1968, pp. 27-44). Philosopher
Rudolf Carnap, one of the leaders of that movement, put it this way:
The researchers of applied logic or the theory of knowledge ... by means of
logical analysis lead to a positive and to a negative result. The positive
result is worked out in the domain of empirical science; the various
concepts ... their formal-logical and epistemological connections are made
explicit. In the domain of metaphysics, including all philosophy of value
and normative theory, logical analysis yields the negative result that the
alleged statements in this domain are entirely meaningless.... In saying
that the so-called statements of metaphysics are meaningless, we intend
this word in its strictest sense. (Carnap, 1959, pp. 60-61)
In his philosophic chapter Professor Thyer contrasts constructivism (the world is whatever we
subjectively define it to be) (Thyer & Wodarski, 1998, p. 3) with one of his beliefs, realism,
which holds that the world exists "independent of the perceptions of human beings" (p. 3). He
identifies constructivism as a reworking of a point of view known as solipsism. In an earlier
article on just this subject, he identifies the great 19th century philosopher Arthur Schopenhauer
as the leading proponent of this view: "[I]t is worth recognizing the apparently derivative
philosophical nature of constructivism as an epistemology and describing more clearly its
evident origins in Schopenhauer's solipsism" (Thyer, 1995, p. 64). Most assuredly this would be
worth recognizing; in fact, Professor Thyer would be hailed as a discoverer of a major new
philosophical fact about Arthur Schopenhauer if that claim were correct. Schopenhauer holds
precisely the opposite view. He called solipsism "theoretical egoism," and he had this to say
about it:
Theoretical egoism, of course, can never be refuted [much like Thyer's
favorite, realism] ... yet in philosophy it has never been positively used
otherwise than as sceptical sophism i.e. for the sake of appearance. As a
serious conviction, on the other hand, it could be found only in a
madhouse; as such it would then need not so much a refutation as a cure.
(Schopenhauer, 1969, p. 104)
This should have been clear to anyone who would have tested B. B. Wolman's (1973, p. 352)
claim cited by Professor Thyer (1995, p. 63) as to the source of solipsism by comparing the
secondary interpretation with the words of the alleged originator Arthur Schopenhauer. This
Professor Thyer neglected to do. What Schopenhauer is asserting, following Kant, and which I
have also asserted in the present article (along with Popper and most biologists, at least those
who find evolutionary biology compelling), is that all information about the world must be
processed through our particular cognitive machinery and cannot be "directly" perceived (Munz,
1985, 1993). It is a theoretical interpretation of the "actual," not a mirror image. This is a very
different claim from the one asserted by Professor Thyer, that Schopenhauer "held a subjective
idealism that the world is a personal fantasy" (Gregory, 1987, p. 699, as cited by Thyer, 1995, p.
63). Schopenhauer's own view (1969) is that
The world is my representation: this is a truth valid with reference to
every living and knowing being, although man alone can bring it into
reflective abstract consciousness. If he really does so, philosophical
discernment has dawned on him. It then becomes clear and certain to him
that he does not know a sun and an earth, but only an eye that sees a sun,
a hand that feels an earth. (p. 3)
This refutation of Professor Thyer's claim illustrates the difficulties with his belief system based
on justificationism and his dependence on secondary sources for support. The justificationary
effort consists of latching on to a belief (e.g., that social work methods are atheoretical), and rather
than testing it rigorously by attempting to falsify it, as is demanded in fallibilism, the effort is
focused on finding support for the favored belief, theory, or intervention. It is not difficult to find
support. It can be had for the asking. The problem of course is that we accept such support all too
easily and often not from the horse's mouth but from the taxidermist. Secondary sources
uncritically accepted often lead to grave errors as in this case, attributing precisely the opposite
view to an individual than the one actually held or applying interventions falsely identified as
effective and which cause harm (Gomory, 1997a, 1999; Solomon & Draine, 1995a, 1995b). The
best use of secondary sources is for learning in a general way about some of the information
that has been examined on some topic of interest. It is a way to start a critical review, not a
way to end one, as is often the case. Secondary sources are highly fallible, and their interpretations of
ideas and research must be tested against the originals. Secondary sources, especially literature
reviews and meta-analyses, are also subject to misinterpretations that can have deleterious
consequences if uncritically relied on (Gomory, 1998, 1999). It helps, for example, if you are
familiar with your secondary sources (i.e., having tested their scholarship by carefully evaluating
their prior work); you might as well use those which have passed critical tests while you continue
to evaluate their current reviews and interpretations.
I mentioned earlier that there is a hint of relativism in Professor Thyer's approach to science, one
that he may not be aware of. As a justificationary thinker he is condemned by his commitments
to offer and find support in order to justify his claims. Let's look at what he considers telling
evidence.
* "credible scientific tests" (Thyer & Wodarski, 1998, p. 12)
* "treatment with some credible degree of support" (p. 13)
* "interventions with some significant degree of empirical support" (p. 16)
* "the use of relatively reliable and valid methods of assessment" (Thyer, 1996, p. 122)
* "theories of human behavior and development that are relatively well supported by empirical
research" (p. 123)
* "Teach methods of social work intervention that are relatively well supported by empirical
research studies" (p. 123)
* "all students [should] provide credible scientific evidence that they have helped at least one
client" (p. 123)
* "When taking social work methods classes ask your instructor ... for supportive references" (p.
124)
Professor Thyer's requirements for evidence--that it be credible, or relatively reliable, or
relatively well supported--all depend on subjective opinion (relativism). Furthermore, he does
not define what he means by these terms, except for "assessment measurement instruments" and
of those he specifies only internal consistency coefficients (which should be .80 or higher to be
useful, p. 122). Every other term is left unspecified and therefore of little empirical utility,
because its meaning is left to subjective, relativistic interpretation, not science. As he notes
(Thyer, 1996),
It is difficult to adequately operationally define concepts such as
"relatively reliable and valid," "appreciable research support," "clinical
judgement" [etc.] ... all phrases used in this article. To some extent
these are subjective judgements of professionally trained social workers.
(p. 125)
This statement, although an admission of the actual problem situation (an inability to provide
objective criteria), is an understatement. The entire "Empirical Clinical Social Work Practice"
justificationary edifice rests ultimately on subjective opinion because there are no objective
standards for the "relatively well supported" or "appreciable" or "credible" or "empirically
supported" evidence, concepts that are used throughout that literature. They depend strictly and
not "to some extent" on relativism.
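As an aside, the one threshold Thyer does quantify (an internal consistency coefficient of .80 or higher) is at least computable rather than a matter of opinion. A minimal sketch, assuming the coefficient meant is Cronbach's alpha, the most common internal consistency statistic, with invented item scores:

```python
# Hypothetical illustration: Cronbach's alpha, the usual "internal
# consistency coefficient." All data below are invented.

def cronbach_alpha(items):
    """items: a list of k item-score columns, one list per item,
    each holding the same n respondents' scores on that item."""
    k = len(items)
    n = len(items[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar(col) for col in items)
    # Each respondent's total score across the k items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / pvar(totals))

# Three items, four respondents (made-up scores).
alpha = cronbach_alpha([[2, 4, 3, 5], [2, 5, 3, 4], [3, 4, 2, 5]])
print(round(alpha, 2))  # prints 0.89
```

On these toy data alpha is roughly .89, which would pass the stated cutoff; the point is simply that this one criterion, unlike "credible" or "relatively well supported," admits an objective check.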
The problem for Professor Thyer and all justificationists is that because of their commitments,
which include not questioning their fundamental inductive theoretical assumptions, they are
stuck. To avoid an infinite regress of having to justify each cited authority by a prior one, they
ultimately must fall back on subjective "final authority" for decisions about what is scientifically
valuable information. This may be beneficial to some of the players in the human services game
while harmful to others. Conveniently, this "authority" often turns out to be the academic or
other recognized expert of a treatment or method, offering them many professional opportunities
for "cashing in" (handbooks, publications, workshops, treatment manuals, to name a few) and
leaving students and workers as supplicants whose only role apparently is to learn by rote the
mechanical steps and implement the well-supported methods of the "authorities"--all because, as
Professor Thyer tells us, "social work practitioners need theory like birds need ornithology"
(Thyer, 1994, p. 148).
Let's examine the claim that theory is at most secondary, if at all relevant, to social work practice
research and to the provision of "credible methods" by social workers in the next section.
Professor Thyer's Claims and Their Application
He compares the work of human service practitioners, which he deems "a pragmatically
acceptable state of affairs," "... to physicians who routinely prescribe various medications even
though the precise pharmacological mechanism of action remains unknown (which is the state of
affairs for most psychotropics prescribed today)" (Thyer, 1994, p. 150). This is a remarkable recommendation.
He appears to suggest that uncritical dispensing of psychotropic drugs is an appropriate model
for our profession, perhaps at least until the "credible" treatment turns out to be toxic. The
fallibilistic approach, in contrast, promotes "autonomous social work," which demands that the
worker, each and every time any form of intervention is applied, be highly sensitive to potential
negative effects of the treatment and actively look for them. The worker would also be expected
to review not only the supportive literature but also that which is critical of the method or
treatment. This self-critical stance is not within the justificationary paradigm and is never
mentioned by Professor Thyer in his discussions. By not actively seeking falsifying information
and only looking for positive evidence, we are susceptible to missing signs of harm, which is
precisely what happened and is happening in the psychotropic drug field.
In the early 1950s neuroleptic (NLP) drugs were hailed as a panacea for problems institutional
psychiatry identified as mental illness. Drug treatment had been provided to "tens of millions of
individuals ... by the mid 1980's, 19 million outpatient NLP prescriptions were written annually
in the United States (Wysosky & Baum, 1989)" (Cohen, 1997, p. 173). Even though the first
reports of harmful side effects,(3) such as movement disorders and parkinsonian syndrome,
occurred as early as 1954 and have been reported consistently ever since (Breggin, 1983, 1991,
1997; Cohen, 1997), the psychiatric professional justificationary response has been to behave as
if no problem existed. Professor David Cohen (1997) of Florida International University has this
to say in one such critique:
Despite [tardive dyskinesia's] significance as a public health problem,
psychiatrists in North America have resisted taking effective steps to
deal with it. (p. 211)
Individuals on long-term medication (six months or longer) have been found to have permanent,
irreversible tardive dyskinesia in approximately 30% of the cases (Gerlach & Peacock, 1995).
Studies reviewing patient non-response rates to NLP drug treatment have found it to vary from
45% to 68% (Cohen, 1994, pp. 143-145). The available well-tested research "suggests" that our
professional justificationary negligence, indifference, and confidence in supposedly validated
treatments such as psychotropics, as promoted by Professor Thyer, is premature. This "effective"
treatment damages the brains of as many as it appears to help. If psychiatric workers had
been trained in fallibilistic critical thinking skills, such as holding a critical attitude toward
these sorts of drugs (i.e., looking for falsificationary counterexamples to the "medicine is
working" hypothesis) and knowing the theoretical rationale for their use (which they would be
looking to critique), they might have contributed to an earlier recognition of the seriousness of
the problem, possibly preventing the reduction or destruction of the physical and social value of
countless lives.
One additional point should be made about Professor Thyer's claim that although a theory
justifying a treatment may be wrong, the intervention is not affected because it appears to get the
job done pragmatically. He states the following:
I do not know of a single effective psychosocial intervention applied
within social work that has been explained by a theoretical mechanism of
action which is well supported by empirical research. (Thyer, 2001, p. 21)
It is worth reiterating that empirical research will never do this and has never done this for any
theoretical mechanism. Support simply cannot be had; only rigorous testing by attempts to falsify
the mechanism's effectiveness is possible, and of course it may be that several such theories (i.e.,
rival hypotheses) may explain the intervention outcome. There are many shapes of flying
machines (e.g., hot air balloons, rockets, winged aircraft, helicopters). They all may obey and
embody the hypothesized theories of gravity and aerodynamics. Science cannot distinguish
between them. All science can tell us are those machines whose flying ability is falsified by
empirical tests (they crash). But this is not a "Problem of Rival Hypotheses" in the way Professor
Thyer thinks that it is (see p. 21 above for Professor Thyer's contrary view on the issue). Tests
through trial-and-error efforts at falsification of competing hypotheses are the only way available
for frail human science to progress. We can only slowly and only occasionally provide new
knowledge by sometimes eliminating a rival hypothesis. That takes constant critical vigilance
and honesty about our vast ignorance.
Let's turn to the question of whether atheoretical interventions are possible. Professor Thyer
thinks so. He provides in his present article what he thinks is a telling example involving "eye
movement desensitization and reprocessing" (EMDR), invented by Francine Shapiro.
Shapiro developed a very elaborate physiological explanation for why having
the client track the therapist's finger as it was waved back and forth in
front of the client's eyes was supposed to alleviate anxiety. Tens of
thousands of mental health professionals have been trained in EMDR, and a
large component has been about the theory of this approach. It has now been
convincingly demonstrated that the theory behind EMDR is invalid. I suspect
that social work's preoccupation with inventing theoretical accounts to
explain the mechanisms of action of psychosocial interventions is in part
driven by the myth that possessing a strong foundation in theory is a
prerequisite for professional status. (Thyer, 2001, p. 20)
Professor Thyer tantalizingly states that the "theory behind EMDR is invalid," but doesn't tell us
whether that means that the EMDR intervention itself is invalid. Following his argument in his
present article, EMDR should still work "pragmatically" since theoretical accounts according to
him are invented inductively (after the "objective" facts are in) to "explain the mechanisms of
action." At least that is what one might assume from how he presents this example in the article.
But even though he doesn't directly address this, the reason for doing evaluation research in the
first place is to determine "efficacy" of treatment. This must entail, I assert, a review of what
theoretical notions organize the intervention. Thyer's claim that what "has now been
convincingly demonstrated [is] that the theory behind EMDR is invalid" is incorrect. What has
actually been falsified is the "efficacy" of EMDR as a psychosocial treatment based both on
empirical and theoretical grounds and not just the theory "behind" it. Lohr, Tolin, and Lilienfeld
(1998), in a careful critical review of the available empirical literature on this intervention state
the following:
It is clear from the review of these 17 studies that there is little
ordinary evidence and no extraordinary evidence to support the efficacy of
EMDR.... There is little evidence for efficacy above and beyond nonspecific
effects ... EMDR's behavioral effects were negligible.... We should note
that measures of treatment efficacy have largely neglected the mechanisms
to which eye movements and information reprocessing are directed. These
mechanisms are purported to involve cognitive content and organization and
the manner in which information is processed.... Research on the effects of
EMDR has yet to incorporate such measures to show an alteration or
acceleration of the processing of affective information. Specific measures
of emotional processing are necessary in inquiries that test not only the
efficacy of the treatment but the validity of the theory that justifies its
application. This applies equally to EMDR and other treatments that target
the emotional or cognitive processing of information related to traumatic
events. (pp. 144-145)
As these reviewers make clear, to evaluate efficacy both the treatment and "the theory that
justifies its application" must be tested. Theory is the glue that binds treatment content.
Let's look finally at some of the social work research that Professor Thyer cites in the present
article as examples of quality research not requiring theory or theory testing and which may have
been hindered by coercion on the researchers to use theory unnecessarily. He argues that doctoral
students are often forced to apply some theory to their results more or less as window dressing
after the fact, resulting in bad research. "Often our academic insistence of foisting the issue of
theory testing onto students results in ... [a]n otherwise sound piece of program evaluation ...
being distorted beyond recognition" (p. 14). Such a state of affairs could only occur if research
and its consequent results could be had atheoretically (inductively), a clear empirical
impossibility as argued throughout this article.
What I conjecture Professor Thyer is referring to is the fact that many social work academics
regard some guesses as "grand theories," especially theories established by other domains (e.g.,
psychiatry, psychology, public health), which are used by social work to assert professional
legitimacy. These are theories that we should respect, as we ought our grandparents (for the sake
of their great age and status). Doctoral students need to use them as props to impress the doctoral
committee authorities by their "clubby" theoretical knowledge in order to become respected
members (PhDs). This type of activity, where it exists, is of course silly. As the sociologist C.
Wright Mills (1961, p. 23) notes about theories of this sort, they "all too readily become an
elaborate and arid formalism in which the splitting of Concepts and their endless rearrangements
becomes the central endeavor," neglecting the only essential purpose of theory--to provide
testable potential explanations for and solutions to problems in the real world. There really is no
need for any intellectual concern by hyperactive doctoral committees about theory utilization.
Our doctoral students, if they actually have gathered data and results, have been using theory all
along, but perhaps not "grand theories."
Professor Thyer provides a number of examples in the present article, several apparently by his
doctoral students. It is not quite clear what he means when he offers them "[in] the spirit of these
contemporary qualitative times [as] anecdotal examples of this distortion of the research process"
(p. 14), except to suggest that these case examples are just personal reflections (biased and
thereby likely to be unreliable) and are not therefore to be taken seriously. That would be most
unempirical and to no point. Since I believe they are presented to make a point and his discussion
of them can be compared to published articles of the studies, I'll assume seriousness and review
two of them.
The first study (Baker & Thyer, 2000), Professor Thyer says,
used a case management model and some simple behavioral prompting
strategies to encourage these initially noncompliant mothers to use their
infant apnea monitors for the requisite number of hours every day. She was
very much working via practice wisdom, common sense and some operant
principles. (p. 7)
Although the description is somewhat sketchy, case management, simple behavioral prompting,
practice wisdom, and common sense appear to be theoretical notions that are being postulated by
the researcher as a coherent potential solution to the problem of "mom compliance." The article
itself (Baker & Thyer, 2000) is much more specific and organized in describing what was being
done. Baker and Thyer don't use the terms "commonsense" or "practice wisdom," instead they
carefully describe a "treatment package" (p. 287). This consisted of "education, case
management and behavioral prompting" (p. 288) and was apparently tested by this team
elsewhere, undermining Professor Thyer's claim of no theoretical organization of the
"intervention package":
One prior study evaluating ... compliance with using a home infant apnea
monitor was conducted by Baker and Thyer (in press), who evaluated a
treatment package involving behavioral prompting, education, and case
management. (Baker & Thyer, p. 287)
It seems at least to this reviewer that a set of theoretical conjectures formalized in a treatment
package was being tested, perhaps something to the effect that "the provision of education about
the consequence of compliance or noncompliance together with case management support and
reminders (called behavior prompting) will significantly improve maternal compliance." Thyer
apparently doesn't recognize this as theory testing or using theory, but he would need to spell out
specifically why, for example, case management (a verbal construct denoting a hypothesized
service method) is a theory-free intervention. One hint is that he seems to be calling this set of
theoretical conjectures and its embodiment together a "psychosocial intervention": "she
developed and verified a reliable psychosocial intervention" (p. 15). The process of "developing"
psychosocial interventions requires theorizing (why select this, rather than that) as far as I can
tell. He may somehow see that theory and intervention are separable. I, as argued earlier, cannot,
and since he provides no clarification in his present article, I look forward to his explanation. He
objects to the doctoral committee's high-handed behavior of coercing Baker to consider the
"health belief model" with "which she was relatively unfamiliar" (p. 15) for framing her already
completed research. I, too, would be upset by such uncritical authoritarian behavior. Doctoral
students are supposed to be free to conjecture their own potential solutions to problems of
interest to them as they best see fit with the support and guidance of academic advisors. But the
fact that this committee failed to meet ethical and educational standards in no way suggests that
she was doing atheoretical research before she was asked to review their recommended theory.
Professor Thyer makes reference to the fact that research has found that human service outcome
research frequently is not being done under a "formal theoretical foundation" and that
practitioners are often unable to articulate a theoretical rationale (p. 15). This, of course, is all
quite distressing (it reflects very badly on social work education). The finding that most "social
workers ... practiced a form of `technical eclecticism,' with little heed being paid to theoretical
underpinnings [Jayaratne 1978, p. 621]" (p. 16) can be translated to mean that social workers are
using uncritical, random elements of various theories arbitrarily combined, or they are just
relying on "seat of the pants" approaches (personal whim?), leaving us ignorant about whether
they help, harm, or do anything at all. These findings call for alarm and concern, not complacent
acceptance of the results as "justifying" the marginalizing of theory.
Moreover, what actual meaning is there to the phrase "formal theoretical foundation" as opposed
to, say, "informal theoretical foundation"? I have argued that these constructs are just tentative
conjectures regardless of their perceived eminence or authority. Academic arguments about what
differentiates formal and informal theoretical foundations, or theories, from paradigms,
frameworks, etc., are reminiscent of linguistic philosophy's verbal mystifications. Linguistic
philosophy, now moribund, engaged hundreds of philosophers in the mid-decades of the 20th
century, who spent their entire professional lives parsing words and their meanings, resulting
in many books, articles, and intellectual authority while stifling the growth of knowledge
(Gellner, 1979). The fact that most of our social work graduates cannot provide the rationale for
what they do suggests a serious lack of critical thinking skills and not something that should be
used as a reason for arguing against theoretical understanding either by researchers or workers.
Professor Thyer offers another research example, Vonk and Thyer (1999), in which "service
agencies' programs are not based on any particular theory of human behavior, and in such cases
it is a disservice to make a pretense of such linkage" (p. 18, emphasis in original). He claims that
Vonk's study turned out to be the most methodologically sophisticated study
ever published on the outcomes of college student counseling centers [and]
I believe that there is a legitimate role for the design and conduct of
outcome studies on social work practice ... which are essentially
theory-free exercises of evaluation research. (p. 18)
This article allows us to review the possibility of a theory-free study and provides evidence of
the level of research rigor that Professor Thyer deems satisfactory for "credible evidentiary
support." He tells us:
In this instance the counseling center was not oriented towards a
particular theoretical model ... nor did [Vonk] construe her outcome study
as a test of any theoretically driven model of psychotherapy. It was a
straightforward, unambiguous, pristine evaluation of the center's services
and of immense value to the administrators running the center (since the
outcomes looked good). (p. 18)
One can't help noticing the justificationary enthusiasm, and the fatal error that results from it,
when he tells us that what was of "immense value" to the administrators was that the results
looked good (i.e., supported effectiveness). Nothing new is learned from positive results,
although they are good for funding and self-promotion and, as suggested earlier, are easy to
find, often due to the well-recognized effects of confirmatory biases (Klayman, 1995). Positive
results just confirm what
you already believe and can have no further inductive benefit. Real help would have come from
negative results. Findings that counter our current assumptions and beliefs provide new
knowledge not previously known. But a closer look at what was done in this evaluation reveals
that nothing in fact could be learned about the causal relationship between the services provided
and the outcomes reported due to the inadequacy of this, "the most methodologically
sophisticated study ever published" (p. 18) on these issues. This unqualified praise would have
more "authority" if it were not being offered by the second author of the study (Vonk & Thyer,
1999).
To begin with, the article itself--unlike Thyer, who states the study's purpose very generally as
"to evaluate the outcomes of ... services at a university student counseling center" (p. 18)--tells us
that the purpose was to evaluate "the effectiveness of short-term treatment in reducing the
psychosocial symptomatology of university counseling center clients" (Vonk & Thyer, 1999, p.
1095). "Short-term" treatments are at a minimum theoretically distinguishable from those which
are not (i.e., long-term treatments), at least as to time. The article further specifies the type of
short-term treatment to be between 4 and 20 sessions and notes, "Although unspecified, the
treatment variable may be better understood by describing the professional backgrounds of the
CC [counseling center] staff members" (p. 1098). It then provides the various methods practiced
by each of the workers (i.e., family systems, behavioral and humanistic techniques, interpersonal
theory, cognitive behavioral approaches). The article goes even further by distinguishing the
specific treatment used at the CC from others that are not:
Due to the preponderance of individual, non-specific short-term treatment
at the CC, as opposed to other treatment methods such as group therapy or
couple counseling, the focus of the evaluation was on the former. (p.
1097)
Professor Thyer states there were many counselors at the center, hinting that there were too many to really get a handle on the methods used (Thyer, 2001, p. 18). There were just eight counselors in the study (though even this may be too many). The article gives a description of what the various approaches of each counselor appeared to be:
Some of the counselors identified themselves as working primarily from one
perspective (i.e., short-term psychodynamic or cognitive-behavioral), most
identified themselves as 'eclectic' and drew from more than one model.
(Vonk & Thyer, 1999, p. 1099)
This tells us, at a minimum, that they are testing the efficacy of "eclectic short-term treatment" and not some broad set of general services, which could have been defined even further had the researchers taken the time; the counselors were, after all, interviewed about their treatment approaches. So some sort of theory of treatment (eclectic short-term treatment) is being evaluated. It may not be the narrow version of theory that Professor Thyer wishes to call theory, but it is theory nevertheless. What is being tested is admittedly an ill-defined "theoretically driven model," but that is a consequence of a careless methodology that claims a disinterest in theory, not a statement about the theory-free nature of the treatment. This methodological laissez-faire is further demonstrated by the sampling and the model used to evaluate effectiveness. A
nonrandom purposive sample of 11.8% of the total population of utilizers of the center was used
in the study, which was a quasi-experimental delayed treatment control group design. The
treatment group had 41 subjects and the control group had 14. The findings, not surprisingly, confirmed the expectations of the researchers and provided joy to the administrators. The only problem is that a quasi-experimental, delayed-treatment, unbalanced control group model with a purposive sample cannot even tell us whether change occurred because of the "eclectic short-term treatment" or due to placebo (the expectation of getting effective treatment, although perhaps none was provided).
The fact that the wait-listed group didn't improve while the treatment group did only tells us, at
best, that change occurred for those having an expectation of something being done immediately
to them, while no change occurred among those who anticipated services only sometime in the
future. So we know that change occurred when clients expected treatment. But that does not
provide any evidence for treatment effectiveness per se, and this type of study cannot make the
critical distinction between treatment and placebo. The non-random nature of the research prohibits causal assertions, especially given the uneven sizes of the groups and the very small number in the delayed-treatment control group, which suggests low statistical power. For example, Kazdin and Bass (1989, p. 144) recommend a minimum sample size of 27 per group for studies comparing treatment with no treatment in psychotherapy research. The threats to internal validity that are not addressed by this type of research are selection-maturation, instrumentation, differential statistical regression, and the interaction of selection and history (Cook & Campbell, 1979, pp. 103-117), which, along with the lack of randomized selection from the population and randomized assignment to the groups, reinforces the illegitimacy of any causal inferences.
Keeping with the study's justificationary agenda (promote any semblance of positive outcome and minimize or ignore critical falsifying issues), no demographic information is provided as to how this small purposive sample of 55 individuals compares to the total population (465 clients) seen at the counseling center. The only information offered by the authors is that an unpublished source (i.e., one not easily available for review) with "raw data" (Raymond, 1996) found no differences based on two mental health measures (the GSI and the SCL-90-R) (Vonk & Thyer, 1999, p. 1103). Such evidence provides no information about the study's objective demographic representativeness (i.e., gender, age, ethnicity, religious affiliation, level of education, employment, marital status, etc.), which would be needed to claim population representativeness; instead it relies on measures of mental health status and "criteria for psychiatric disorders" as "stand-ins," concepts which, at a minimum, are in controversy, as argued earlier.
These methodological problems are not ones that Professor Thyer finds major impediments to
the type of empirical work he thinks useful for social work outcome research. In rebuking
Epstein (1995) for arguing that randomized controlled experimental trials are essential to get at
the critical testing of causal outcomes of treatments, he states,
The present author personally subscribes to a much less stringent standard, recognizing the value of quasi-experimental and single-system research designs in terms of their ability to isolate credible findings. (Thyer, 1996, p. 125)
He goes on to cite William Reid's Task Centered Practice (TCP) as having been developed by
such "credible quasi-experimental studies [to] suggest that TCP can be a very helpful social work
intervention" (p. 125). What rigor is there in words such as "credible" or "suggest"? How do they
relate to cause-effect determination? A cause-effect relationship either is or is not. If research is
"suggestive" of a causal relationship, other research may be "suggestive" of no causal
relationship. I have argued that such statements are simply personal judgments, which cannot be used for scientific evaluation (through critically falsifying tests).
As Kazdin and Weisz (1998) state, agreeing with Chambless and Hollon (1998), treatments to be labeled efficacious (note that they refrain from terms like "credible") "must have been shown to be more effective than no treatment, a placebo, or an alternate treatment across multiple trials conducted by different investigative teams" (p. 22). As a fallibilist, I would add that the label "efficacious" should be held tentatively and tested each and every time the treatment is applied, rather than, as their quote suggests, being applied more or less permanently after some limited number of "successful" trials, as if no further critical evaluation need occur because efficacy has been demonstrated. Perhaps Professors Reid and Thyer can
judge their work to be "credible" and "suggestive," but their use of research models that cannot
assert causal relationships between treatment and outcome due to their limitations will always
allow others to argue the alternative with equal validity (i.e., TCP is not credible and is not
suggestively helpful). This is a debate about authority and power, not science. It epitomizes the
justificationary dilemma. In order to validate, support must be found, but no amount of it is quite
enough to find the truth, and we don't know what good support looks like objectively, so experts
have to subjectively judge what is credible since we can't get at the objectively true. And if there
is disagreement, those with more authority get to decide what is more credible, but neither
Professor Thyer nor any other justificationist can explain how being credible relates to being
true.
It should be said on Professor Reid's behalf that his view of how science ought to be done in social work differs from Thyer's atheoretical approach and, dare I say, hints at fallibilism:
Any system of social treatment is supported by a body of theory.... We can
at least demand ... that a theory be cast into a testable form. This means
that theoretical formulations need to be accompanied by a specification of
how they can be tested.... The need for problem-oriented, testable theory
in clinical social work has guided our efforts to develop the theoretical
base of the task-centered model. (Reid, 1978, pp. 12-17)
Reid also candidly admits the limitations of some of his quasi-experimental research, and thereby disagrees with Thyer's notions of causal research, when he tells us that
Early studies of the model [TCP] consisted largely of exploratory tests of
its application.... They did not, however give us definitive data on the
effectiveness of our methods. Although outcome data were accumulated, none
of the studies was adequately controlled; that is, we did not use control
groups or equivalent procedures that would permit us to conclude that the
treatment methods made a difference in how cases turned out.... Our first
controlled test of task-centered methods consisted of ... 32 clients ...
randomly assigned to experimental and control conditions. (p. 225)
Even Reid, an expert Thyer appeals to for support of so-called "credible" research, is careful to point out that this sort of research does not permit causal inference and that randomized methodology is required for that.
Concluding Remarks
Professor Thyer deserves a great deal of credit for again raising a very important set of issues
clearly before social work educators and scholars. What is science? What should the relationship
of the profession of social work be to science? What research methods should be used in various
types of research? What has theory to do with social work practice and how should social work
research be conducted?
He has consistently argued for his Empirical Social Work Practice views, often against those who have disagreed (i.e., Witkin, 1991). In his current article, he argues for these views by contending that the preoccupation with "theory" and theory testing in our field has limited the empirical development of effective interventions, which often do not use and do not need theory; only their pragmatic capability really counts. I have argued that Professor Thyer should be commended for noticing that social work has major difficulties with its educational approach as well as serious methodological limitations in acquiring a knowledge base of effective treatments, but that he should be critiqued rigorously for failing to engage fully and carefully with the essential scientific issues of philosophy and method, and for suggesting that some members of the profession (i.e., students and direct service workers) need not think too critically but should simply apply pragmatic knowledge, leaving theory, if necessary at all, to the academics.
I argue that Professor Thyer, due to his justificationary approach to science, has not been able to
see that efforts at finding proof, support, and credibility for his atheoretical pragmatic research
are doomed to failure because no such proof or support is possible. He appears to be unaware of
the fallibilistic alternative which I have presented (he never discusses it in any of his writings),
although he argues his position by claiming to know philosophy of science. If he had been aware
of Popper's falsification of induction, he would not have been able to argue the separation of
theory from observations, or that objective observations can add up to theory, or that
interventions need no theory, without at least having to confront the problem of induction and
provide his counterargument as to why he would discount it.
I have also presented the difficulties with the justificationary position held by Professor Thyer
and many other social work authors. Most importantly, it leads to an all-out effort at searching for and providing proof of one's beliefs, and not to a critical evaluation of them. This approach is
often exemplified by a justificationary author subtly changing the descriptions found in the
original sources to suit the justificationary claim of the author, or using research methods which
cannot measure what is being tested (i.e., intervention effectiveness), or employing vague terms
which subjectify, confuse, and reduce understanding, or using selectively some primary or
secondary sources because they "support" the author's claim while ignoring others which may be
critical or have falsified the view held. Justificationary research cannot lead to clarity but only to unmeasurable and unhelpful statements of future "Treatment Utopias" such as, "Simple behavioral and case management interventions show great promise" (Baker & Thyer, 2000, p. 285).
Recognizing that proof cannot be had but rigorous tests can on occasion lead to falsifications of
our theories and our interventions argues for our profession taking the critical stance seriously by
making fallibilistic critical thinking a necessary component of social work education at all levels.
The aim should be to create "autonomous social workers" who can decide through rigorous open
debate and tests what are better and worse policies and interventions. This approach would serve
to promote and meet our ethical commitment to help our clients receive the best possible services
while modeling for them the autonomy we hope to help them acquire.
This approach consists of identifying the real-world problems we are interested in (i.e., a client problem), then hypothesizing a possible solution or effective intervention, developing a critical test for it (this will vary depending on the nature of what is being tested), and then testing it. If the test is passed, we can continue using the idea, theory, intervention, or policy, but always with our critical faculties alert to potential negative feedback through trial and error. If it fails the test (empirically, usually several tests), we abandon it and hypothesize new alternatives, in so doing both eliminating false knowledge and discovering tentatively true knowledge, hopefully thereby making our and our clients' world a little better off.
(*) Editor's Note: Due to a composition error, Table 2 was not included in the copyedited page
proofs sent to Professor Gomory to prepare his reply. This table appeared in Thyer's original
manuscript and is included in the article published here.
(1) For a fallibilistic critique of anthropological methods of this sort predating grounded theory,
see Jarvie (1967).
(2) See also McNeece and DiNitto (1998, pp. 180-209) for an evidence-tested discussion of drug policy and its consequences by social work authors.
(3) "Side effects" is the label placed by promoters of a drug on effects that are not beneficial to the promoters' view of the drug's usefulness. Psychoactive chemicals simply have various effects; terms like "main effects" and "side effects" are propaganda terms intended to mislead (i.e., to help the pharmaceutical/industrial complex sell its wares; Wong & Gomory, 2000), not to provide scientific information.
REFERENCES
Agassi, J. (1975). Science in flux. Dordrecht, Holland: Reidel.
Agassi, J. (1985). Technology: Philosophical and social aspects. Dordrecht, Holland: Reidel.
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders
(4th ed.). Washington, DC: Author.
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders
(4th ed. TR). Washington, DC: Author.
Andreasen, N. C. (1997). Linking mind and brain in the study of mental illness: A project for a
scientific psychopathology. Science, 275, 1586-1592.
Andreasen, N. C. (1999a). Deconstructing schizophrenia. American Psychiatric Association
1999 Institute on Psychiatric Services [Online]. Available at http://psychiatry.medscape.com/
Medscape/CNO/1999/APA/APA_CS/APA-03.html.
Andreasen, N. C. (1999b, September). A unitary model of schizophrenia. Archives of General
Psychiatry, 56, 781-787.
Arkava, M. I., & Lane, T. A. (1983). Beginning social work research. Boston: Allyn and Bacon.
Baker, L., & Thyer, B. A. (2000). Promoting parental compliance with home infant apnea
monitor use. Behaviour Research and Therapy, 38, 285-296.
Bloom, M., Fisher, J., & Orme, J. G. (1995). Evaluating practice: Guidelines for the accountable
professional (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
Breggin, P. (1983). Psychiatric drugs: Hazard to the brain. New York: Springer.
Breggin, P. (1991). Toxic psychiatry. New York: St. Martin's.
Breggin, P. (1997). Brain disabling treatments in psychiatry: Drugs electroshock, and the role of
the FDA. New York: Springer.
Briar, S. (1967). The current crisis in social casework. In Social work practice (pp. 19-33). New
York: Columbia University Press.
Bunge, M. (1984). What is pseudoscience? The Skeptical Inquirer, 9(1), 36-47.
Campbell, D. T. (1987). Evolutionary epistemology. In G. Radnitzky & W. W. III Bartley (Eds.),
Evolutionary epistemology, rationality, and the sociology of knowledge (pp. 47-89). LaSalle, IL:
Open Court.
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for
research. New York: Houghton Mifflin Company. (Originally published 1963)
Carnap, R. (1959). The elimination of metaphysics through logical analysis of language. In A. J.
Ayer (Ed.), Logical positivism. (pp. 60-81). New York: Free Press.
Chambless, D. L., & Hollon S. D. (1998). Defining empirically supported therapies. Journal of
Consulting and Clinical Psychology, 66, 7-18.
Cohen, D. (1989). Biological bases of schizophrenia: The evidence reconsidered. Social Work,
34, 255-257.
Cohen, D. (1994). Neuroleptic drug treatment of schizophrenia: The state of the confusion.
Journal of Mind and Behavior, 15, 139-156.
Cohen, D. (1997). A critique of the use of neuroleptic drugs in psychiatry. In S. Fisher, & R.
Greenberg (Eds.), From placebo to panacea: Putting psychiatric drugs to the test, (pp. 173-228).
New York: Wiley.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for
field settings. Boston: Houghton Mifflin.
Epstein, W. (1995). Social work in the university. Journal of Social Work Education, 31, 281-292.
Fisher, J. (1973a). Has mighty casework struck out? Social Work, 18, 107-110.
Fisher, J. (1973b). Is casework effective? A review. Social Work, 18, 5-20.
Fisher, J. (1976). The effectiveness of social work. Springfield, IL: Charles C. Thomas.
Gambrill, E. (1999). Evidence-based practice: An alternative to authority based practice.
Families in Society, 80, 341-350.
Gellner, E. (1979). Words and things: An examination of, and a critical attack on, linguistic
philosophy (Rev. ed.). London: Routledge & Kegan Paul.
Gerlach, J., & Peacock, L. (1995). Intolerance to neuroleptic drugs: The art of avoiding extra
pyramidal symptoms. European Psychiatry, 10(Suppl. 1), 275-315.
Gomory, T. (1997a). Does the goal of preventing suicide justify placing suicidal clients in care?
No. In E. Gambrill & R. Pruger (Eds.), Controversial issues in values, ethics and obligations (pp.
70-74). Boston: Allyn and Bacon.
Gomory, T. (1997b). Social work and philosophy. In M. Reisch & E. Gambrill (Eds.), Social
work in the 21st century (pp. 300-310). Thousand Oaks, CA: Pine Forge.
Gomory, T. (1998). Coercion justified--Evaluating the training in community living model--A
conceptual and empirical critique. Unpublished doctoral dissertation, University of California at
Berkeley.
Gomory, T. (1999). Programs of assertive community treatment (PACT): A critical review.
Ethical Human Sciences and Services, 1(2), 147-163.
Gregory, R. L. (Ed.). (1987). The Oxford companion to the mind. New York: Oxford University
Press.
Grinnell, R. M., Jr. (Ed.). (1993). Social work research and evaluation (4th ed.) Itasca, IL: F. E.
Peacock.
Jarvie, I. C. (1967). The revolution in anthropology. Chicago: Regnery.
Jayaratne, S. (1978). A study of clinical eclecticism. Social Service Review, 52, 621-631.
Kazdin, A. E., & Bass, D. (1989). Power to detect differences between alternate treatments in
comparative psychotherapy outcome research. Journal of Consulting and Clinical Psychology,
57(1), 138-147.
Kazdin, A. E., & Weisz, J. R. (1998). Identifying and developing empirically supported child and
adolescent treatments. Journal of Consulting and Clinical Psychology, 66(1), 19-36.
Kirk, S. A., & Kutchins, H. (1992). The selling of DSM: The rhetoric of science in psychiatry.
New York: Aldine De Gruyter.
Klayman, J. (1995). Varieties of confirmation bias. In J. Busemeyer, D. L. Medin, & R. Hastie
(Eds.), Decision making from a cognitive perspective (pp. 385-418). New York: Academic
Press.
Kutchins, H., & Kirk, S. A. (1997). Making us crazy: DSM: The psychiatric bible and the
creation of mental disorders. New York: Free Press.
Lohr, J. M., Tolin, D. F., & Lilienfeld, S. O. (1998). Efficacy of eye movement desensitization
and reprocessing: Implications for behavior therapy. Behavior Therapy, 29, 123-156.
Magee, B. (1997). The philosophy of Schopenhauer. Oxford: Oxford University Press.
McNeece, C. A., & DiNitto, D. M. (1998). Chemical dependency: A systems approach. Boston:
Allyn and Bacon.
Medawar, P. (1988). Memoir of a thinking radish. Oxford: Oxford University Press.
Miller, D. (1994). Critical rationalism: A restatement and defense. Chicago: Open Court.
Mills, C. W. (1961). The sociological imagination. New York: Grove.
Munz, P. (1985). Our knowledge of the growth of knowledge: Popper or Wittgenstein. London:
Routledge & Kegan Paul.
Munz, P. (1993). Philosophical darwinism. London: Routledge.
Peckham, M. (1979). Explanation and power. New York: Seabury.
Popper, K. (1968). The logic of scientific discovery. New York: Harper and Row. (Original work
published in 1959 by Basic Books)
Popper, K. (1989). Conjectures and refutations: The growth of scientific knowledge (2nd ed.).
New York: Basic Books. (Original work published 1965)
Popper, K. (1974). Autobiography of Karl Popper. In P. A. Schilpp (Ed.), The philosophy of Karl Popper (Vol. 1, pp. 3-181). La Salle, IL: Open Court.
Popper, K. (1979). Objective knowledge. London: Oxford University Press.
Popper, K. (1983). Realism and the aim of science. Totowa, NJ: Rowman & Littlefield.
Radnitzky, G., & Bartley, W. W., III. (Eds.). (1987). Evolutionary epistemology, rationality, and
the sociology of knowledge. LaSalle, IL: Open Court.
Reid, W. (1978). The task-centered system. New York: Columbia University Press.
Reid, W. J. (1998, November). Empirically-supported practice: Perennial myth or emerging
reality. Lecture, School of Social Welfare, University at Albany, State University of New York.
Raymond, V. V. (1996). Factors related to referral at a university counseling center. Unpublished
raw data.
Rosen, A., Proctor, E. K., & Staudt, M. (1999). Social work research and the quest for effective
practice. Social Work Research, 23, 4-14.
Rubin, H., & Babbie, E. (1989). Research methods for social work. Pacific Grove, CA: Brooks/
Cole.
Schaler, J. A. (2000). Addiction is a choice. Chicago: Open Court.
Schopenhauer, A. (1969) The world as will and representation (E. F. J. Payne, Trans., Vol. 1).
New York: Dover.
Schutt, R. K. (1996). Investigating the social world: The process and practice of research.
Thousand Oaks, CA: Pine Forge.
Solomon, P., & Draine J. (1995a). Jail recidivism in a forensic case management program.
Health and Social Work, 20(3), 167-172.
Solomon, P., & Draine J. (1995b). One-year outcomes of a randomized trial of case management
with seriously mentally ill clients leaving jail. Evaluation Review, 19(3), 256-273.
Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge
University Press.
Szasz, T. (1990). Sex by prescription. Syracuse, NY: Syracuse University Press.
Szasz, T. (1992). Our right to drugs: The case for a free market. New York: Praeger.
Thyer, B. A. (1994a). Are theories for practice necessary? No! Journal of Social Work
Education, 30, 148-151.
Thyer, B. A. (1994b). Social work theory and practice research: The approach of logical
positivism. Social Work and Social Services Review, 4, 5-26.
Thyer, B. A. (1995). Constructivism and solipsism: Old wine in new bottles? Social Work in
Education, 17(1), 63-64.
Thyer, B. A. (1996). Guidelines for applying the empirical clinical practice model to social work.
Journal of Applied Social Science, 20, 121-127.
Thyer, B. A. (2001). The role of theory in research on social work practice. Journal of Social
Work Education, 37, 9-25.
Thyer, B. A., & Myers, L. L. (1999). On science, antiscience, and the client's right to effective
treatment. Social Work, 44, 501-504.
Thyer, B. & Wodarski, J. (Eds.). (1998). Handbook of empirical social work practice. (2 Vols.).
New York: Wiley.
Vonk, E. M., & Thyer, B. A. (1999). Evaluating the effectiveness of short-term treatment at a
university counseling center. Journal of Clinical Psychology, 55, 1095-1106.
Witkin, S. L. (1991). Empirical clinical practice: A critical analysis. Social Work, 36, 158-163.
Wolman, B. B. (Ed.). (1973). Dictionary of behavioral science. New York: Van Nostrand
Reinhold.
Wong, S. E., & Gomory, T. (2000, February). Clinical social work and the
psychiatric/pharmaceutical industrial complex. Paper presented at the 46th Annual Program
Meeting of the Council on Social Work Education, New York.
Wysosky, D. K., & Baum, C. (1989). Antipsychotic drug use in the United States, 1976-1989. Archives of General Psychiatry, 46, 929-932.
Address correspondence to: Tomi Gomory, School of Social Work, Florida State University,
Tallahassee, FL 32306-2570; e-mail: tgomory@mailer.fsu.edu.
Tomi Gomory is assistant professor, School of Social Work, Florida State University.