
Introduction to Explication
Explication is the process whereby a researcher develops and applies empirical tests of theory.
Though I will present the process as a series of discrete steps for the sake of explanation, in practice
researchers often do not follow such a clear, linear process. They may jump forward or backward and
may be working on multiple steps at once. I’ll be outlining a sort of ‘ideal’ model of explication, but
reality is invariably more complicated and imperfect.
Conceptualization
Conceptualization refers to the careful review of the concepts, relationships, and so on that
make up the theory under consideration. The goal is to generate a clearer and more concrete set of
‘constructs’ that can be measured empirically. Most ideas or concepts we have are vague and ill-formed—general notions that are not well thought out or clarified. We may also have some ideas about how they fit together—some informal ‘theories’ about how the world works.
Let’s say we have a simple theory about cigarette consumption: “Smoking causes cancer.” Most
people probably believe this theory, but its overly general nature leaves it open to a number of
questions. What if you don’t inhale? Does smoking cigarettes where the tar and nicotine have been
removed cause cancer? What kinds of cancer? And when we say ‘cause’ what are we claiming? I know
people who grew old while smoking heavily their whole lives and then died from causes other than cancer. Does that disprove the theory? Thinking our terms through more carefully and giving them
much more precise definitions can help a great deal.
To distinguish between these vague notions we will call them concepts while the much more
carefully defined, precise and concrete ideas generated via the process of conceptualization will be
called constructs. The vagueness and imprecision of concepts make it very difficult to identify them in
the empirical world and to measure them effectively, while constructs should be much more amenable
to identification and measurement.
To develop more precise and measurable definitions the researcher will draw upon his own
experience and theoretical insights, evaluate what others—especially other researchers—have said
about the concepts, and see how they fit into the theoretical scheme he has developed. That is, he will
think long and hard about what he means by his concept, will read up on what popular and scholarly
sources have said about it, and will see how the definition he comes up with works when he tries to
relate it to other concepts.
More formal means of conceptualization are: 1) Meaning analysis, 2) Inclusion analysis, 3)
Exclusion analysis, and 4) Evaluation of theoretical necessity. These are often undertaken together (especially 2 and 3), providing a more effective process than any of them would individually.
Meaning analysis
In meaning analysis, the researcher compares her definition of the concept to those available in
dictionaries, existing studies and/or prior formal conceptualizations that she is able to access.
Sometimes specialized dictionaries or encyclopedias provide a review of the uses that the term has had
in the discipline. Sometimes a theorist known for her work with the concept provides an extensive
explanation of her understanding of the concept or an argument for a particular definition. If these
scholarly approaches are not available, the researcher may go to more popular sources such as
Webster’s Dictionary or Roget’s Thesaurus. Using relatively generic definitions as a starting place, she
may add her own more specific requirements to build a ‘conceptual definition’ that will translate the
vague original concept into a more tightly drawn, well-defined ‘construct.’
A common part of this process is the identification of subconcepts—ideas that refer to only a
portion of the cases or examples covered by the larger concept. By identifying the smaller, more
specific ideas that combine to form the more general concept under study, it is possible to increase
our understanding of the nature of the more-encompassing idea. It may be the case, for example, that
the theorized relationship between two concepts really is only based upon a relationship between two
subconcepts. The greater specificity of this relationship increases our understanding and provides
greater guidance with regard to action related to our theory.
Inclusion and exclusion analyses
To carry out these forms of analyses, the researcher thinks up objects, events, etc. (cases) that
he considers examples of the concept or else feels should not be considered examples of the concept.
The current conceptual definition is applied to each case to determine whether it will be identified as an
example of the concept or not. The goal is to see that the conceptual definition includes cases that the
researcher feels should be included and excludes cases he feels should be excluded. When a case that
should be included is not or when a case that should be excluded is identified as an example of the
concept, the definition has led to an error. By adjusting the definition so that cases that should be
included are included and cases that should be excluded are excluded, the definition is strengthened
and clarified.
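Although explication is a conceptual exercise rather than a programming one, the logic of inclusion and exclusion analysis can be sketched in a few lines of code. In the illustrative Python sketch below (the cases, their features, and the rule itself are invented for the example), the conceptual definition is treated as a simple rule and applied to cases the researcher has already judged; any mismatch flags a spot where the definition needs revision.

    # Illustrative sketch only: a toy 'conceptual definition' of smoking treated as a rule,
    # checked against hypothetical cases the researcher has already judged.

    def fits_definition(case):
        """Toy definition: inhaled use of a combustible tobacco product."""
        return case["combustible"] and case["inhaled"]

    # Hypothetical cases, each with the researcher's judgment of whether it should count.
    test_cases = [
        {"name": "filtered cigarettes", "combustible": True,  "inhaled": True,  "should_count": True},
        {"name": "cigars, not inhaled", "combustible": True,  "inhaled": False, "should_count": True},
        {"name": "nicotine patch",      "combustible": False, "inhaled": False, "should_count": False},
    ]

    for case in test_cases:
        included = fits_definition(case)
        if included != case["should_count"]:
            kind = "wrongly excluded" if case["should_count"] else "wrongly included"
            print(f"{case['name']}: {kind} -- the definition needs adjustment")

In this toy example the ‘cigars, not inhaled’ case would be flagged as wrongly excluded, prompting the researcher to loosen or rework the definition, just as described above.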
Theoretical necessity
This exercise calls for a thoughtful evaluation of what characteristics are necessary for a concept
to fit within the theoretical framework applied by the researcher. For example, if he is working within a
form of information processing theory, what are the requirements for the concept of ‘working
memory?’ What characteristics of potential definitions would make the concept unsuitable for use in
the theory? If a definition of working memory portrays it as disembodied—detached from the physical
brain in some way—that portion of the definition might need to be jettisoned in order for the concept
to fit within the larger information-processing theory.
Let’s consider an example:
Jim Gleason, who recently received his doctorate from this college, focused his dissertation on
research meant to help conceptualize ‘interactivity’ as it relates to new communications technologies.
He objected to earlier definitions that presented interactivity as a feature of the media themselves,
feeling that interactivity was better understood as an experience on the part of the users of the
technology. He looked at a number of previous definitions found in the research literature and tried to
identify features they had in common and areas where they diverged [Meaning analysis]. He built a
construct of interactivity that included several subconcepts and used a number of means to determine
whether the concept and its subcomponents held together in empirical study. [Meaning analysis]
In his dissertation defense he was asked to determine which of several scenarios constituted
cases of interactivity. Would, say, two capable users communicating with each other but not really
using the technical features normally considered the interactive features of a medium be considered a
case of interactivity? Was it possible to have interactivity when only one of a communicating couple
perceived the communication as an interaction? That is, if a boss saw the communication as a one-way
directive and the subordinate saw it as a dialogue in real time with real mutual influence and responses,
was that a case of interaction? [Inclusion and exclusion analysis]
Finally, certain features of his concept of interactivity were crucial to his theory and derived
hypotheses (theoretical predictions). We asked him what parts of his construct he could give up and still maintain the theory and which features were crucial [Theoretical necessity]. He tested the hypotheses using empirical methods and data, found partial support and partial nonsupport, and ultimately reanalyzed his construct in the face of the findings [Reconceptualization].
Operationalization
Once the basic ideas reflected in the theory are adequately defined, the researcher develops a
plan for empirically testing the theory. To carry out this process, she will need to choose a basic method
of research (experiment, survey, content analysis, observation, etc.) and develop the measures used in
the chosen method. The method is usually chosen first, as it determines what types of measures can be
used effectively and efficiently, and the choice of method is usually the more significant. Additionally, it
is usually easier to find a measure to fit a given method than vice versa.
Choice of method will hinge upon the information needs with regard to the theory or question
under investigation. If an estimate of population attitudes about radio talk show hosts is needed, the
likely method will be a survey. Further specification of information needs will then lead toward
a particular form of survey, the range of questions included, and so on. A need for a causal test with
strong internal validity could signal that a laboratory experiment would be the most efficient and
effective method, and so on.
Depending upon the method chosen, the researcher will need to develop measures meant to
represent the constructs included in the research questions, hypotheses, and so on. All constructs,
including the relationships specified in the theory, must be reflected in the method, usually by a form of
measurement. Survey questions are a form of measurement. Experimental manipulations are, also.
That is, exposing one group to a violent movie and another to a non-violent one is a measure of
exposure to violent movies with one group scored ‘1’ for having seen the violent film and the other
scored ‘0’ for not having seen it. Coding rules form the measures for content analyses, and notes form the measures for observation (unless a more formalized code sheet is developed).
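To make the point that a manipulation is itself a measure a little more concrete, here is a minimal Python sketch (the participants and group assignments are invented for the example) that scores exposure to the violent film as 1 and exposure to the non-violent film as 0, as described above.

    # Illustrative sketch: an experimental manipulation recorded as a 0/1 measure.
    # Participants and group assignments are hypothetical.

    participants = ["P01", "P02", "P03", "P04"]
    assignment = {"P01": "violent", "P02": "nonviolent", "P03": "violent", "P04": "nonviolent"}

    # Exposure to the violent film: 1 for the treatment group, 0 for the comparison group.
    exposure = {pid: 1 if assignment[pid] == "violent" else 0 for pid in participants}

    print(exposure)  # {'P01': 1, 'P02': 0, 'P03': 1, 'P04': 0}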
The choice of method and measure reflects the needs of the study, as noted above, but is also influenced by cost (and the availability of funding), difficulty of application, the time frame from start to finish, the researcher’s familiarity with the method, disciplinary preferences, and other factors. In essence, the researcher weighs the cost of the study against the value of the information to be gained.
Application
At some point the model for the research must be realized in the empirical world. That is, the
plans for the study must be carried out in an imperfect world. The actual study never perfectly reflects
the plan, and the further it deviates from that plan, the less valid the conclusions drawn from the collected data are likely to be. Though I won’t go into detail here, actually running research studies is both a
science and an art. Quality research is much harder to produce than mediocre research. Knowing when
and how to adjust studies ‘in the field’ is important to generating quality data or salvaging a study that
has been sidetracked by unexpected events, subject or research team behavior, etc.
From concerns over the amount of white space and skip patterns on paper surveys, to the quality of recruitment for focus group studies, to the size of the screen used to show subjects a trailer prior to a film’s release, the details, consistency, and quality of research application can make or break a study.
As an example, I was part of a research study that included focus groups with members of a western Minnesota community. The goal was to determine what types of beliefs and
attitudes they held that supported their heavy-red-meat diet, and what sorts of beliefs they had that
might be used in a campaign to encourage adoption of a healthier diet. It was winter, and as the dates for the groups approached, a fierce winter storm hit. Eighteen inches of snow fell and a forty-mile-an-hour wind blew up. Two of the four groups were cancelled, and the moderator and I had to drive through
the remnants of the storm to salvage the two remaining groups. Because some members of the final
two groups did not live in town, we asked some of those recruited for the earlier groups to fill in the
missing slots. The research was clearly changed from what originally had been planned and some of the
questions meant for the earlier groups were asked in the remaining ones. Though the effectiveness of
the method that was actually carried out compared to what was planned can be debated, the point is
that we had to be flexible and attempt to generate the best data we could under the circumstances.
Studies have shown that even fairly subtle factors can influence results in sometimes powerful ways: changes in the voice of experimenters giving instructions; the demographics, attractiveness, and attitudes of interviewers; the physical surroundings; noise during the study; and many others.
Careful adherence to research plans and attention to detail are often crucial for research success.
Once the data are collected and carefully stored (and copies made) the analysis begins—and
we’ll discuss that later in the semester.