September 27, 2007
Chapter 2: Examining Preliminary Considerations
This chapter looks at mixed methods paradigms, elements of quant and qual
research, and types of research problems best addressed by mixed methods. The terms
“worldview” and “paradigm” address how we look at the world and therefore how we
conduct our research.
Different Worldviews or Paradigms
“…[B]ehind each study lie assumptions the researcher makes about reality, how knowledge is obtained, and the methods of gaining knowledge.” Four worldviews are
shown in Table 2.1. I used blue text to highlight elements that might apply to the test
error analysis research.
Table 2.1 Four Worldviews Used in Research

Postpositivism
- Determination
- Reductionism
- Empirical observation and measurement
- Theory verification

Constructivism
- Understanding
- Multiple participant meanings
- Social and historical construction
- Theory generation

Advocacy and Participatory
- Political
- Empowerment and issue oriented
- Collaborative
- Change oriented

Pragmatism
- Consequences of actions
- Problem centered
- Pluralistic
- Real-world practice oriented

Source: Creswell (2003)
They say the worldviews “have common elements, but take different stances on”
them in terms of the ontology (nature of reality), the epistemology (how we learn what we
know), the axiology (roles of values in research), the methodology (process of research),
and the rhetoric (language of research).
Here, “methodology” is a process rather than a “philosophical framework”. The
methodology associated with constructivism later in this chapter is severely limited and
indicates a lack of understanding of a constructivist view of the research process.
Postpositivist research takes a “top down” approach from theory to an experiment
that confirms or denies it. Constructivism takes a “bottom up” approach using the
participants’ views to build broader themes and generate a theory interconnecting them.
Clearly, my study is on the constructivist side of this continuum. I agree.
In fact, you and I had assumed that my study stemmed from a constructionist
epistemology. (Are you intentionally distinguishing between “constructionist” and
“constructivist” or was this a typo?) Oops! Typo! Thanks for catching it! This Markup
setting is weird and will take some getting used to! However, based on what they’re
saying, pragmatism seems to fit as well, with hints of the advocacy/participatory
paradigm.
Quoting this excerpt from earlier issues of my study plans, I figured that
…[the] test error analysis tool should help students place meaning on their graded
tests and turn the assessment process into a learning process and meta-cognitive
aid. The tool allows them to interpret their testing errors, analyze the results, and
make plans to minimize error recurrence. In turn, the tool’s documentation of the
students’ cumulative assessment could help them learn about their strengths and
their areas for improvement and use that knowledge in a powerful way.
I used the theoretical framework shown below to provide a model for the
research:
[Figure: Theoretical framework showing the student’s meta-cognitive loop (student intra-personal interaction) above a triangle of student/parents/teacher interaction]
This framework and its interpretivist perspective reflect the meaning the
students place on their analysis and their development of personal solutions to
promote their own improvement. This is represented by the student’s meta-cognitive loop at the top. The triangle at the bottom represents the
communication patterns between students, parents, and teachers. The test
analysis tool supports student analysis and communication at a detailed level to
and from parents and to and from the teacher as well as between parents and the
teacher. Secondarily, meta-cognitive loops could be added for the parents and
teacher showing any guidance gained by the tool for use in parenting and teaching
processes.
Note that the highlighted components supporting constructivism in Table 2.1 are:
understanding, multiple participant meanings, social and historical construction, and
theory generation. It seems to me that from a constructivist perspective, the tool should
promote student understanding of patterns and trends in addition to teacher and parent
understanding of student performance. The very foundation of test error analysis is that
teachers can’t always determine the root causes of student errors without additional
student input, thereby recognizing possible multiple participant meanings. Students are
reflecting back on errors they made, in an exercise of historical construction. Student
brainstorming sessions to provide advice on overcoming errors allow for social
construction. Furthermore, the entire study might evolve into a bit of theory generation,
if I correctly understand the meaning of any of these phrases. (If not, please enlighten
me!)
But then I look at the advocacy/participatory column, which emphasizes political, empowerment and issue oriented, collaborative, and change oriented elements. Although I don’t
see political ramifications to the research, the process empowers students to analyze their
errors and learn from them, which may not be a major issue in the grand scheme of
things, but perhaps a minor one. It’s collaborative in three ways: (1) the students
collaborate to come up with ideas for preventing the errors, (2) they can tailor the tool to
their needs with “other” error types, and (3) the study may involve other teachers who
choose to use the tool. If the tool proves to be effective, it could encourage change.
Furthermore, consider the pragmatism paradigm. Its bullets are: consequences of actions, problem centered, pluralistic, and real-world practice oriented. Using the tool, the
students analyze consequences of actions (points lost for errors). It is problem-centered
in that each error is a problem and the problems are analyzed by the students to educate
the teacher and parents. It seems pluralistic in that different students can make the same
errors for different reasons and the tool helps to distinguish between those differences
(for example, if the student left a question on a test blank, was it because they
accidentally omitted the question, ran out of time, didn’t understand the question, or
what?). Finally, it seems to be real-world and practice oriented because the testing is
very real to the students and the teacher.
So, if I interpret the components of these worldviews correctly, my study spans constructivism, advocacy/participatory (though maybe not as much), and pragmatism. At
first I was thinking, am I missing something here? Is the study really straddling these
three worldviews? Do I have to pick one? Or can the study be mixed in this way, too?
Read on to see how we can resolve this discomfort.
I think the aspects of pragmatism and advocacy/participatory paradigms you associate
with your proposed research design can fit within a constructivist world-view. In other
words, referencing aspects of these three world-views can be consistent with an
overarching view of how knowledge is constructed that would fit within the
“constructivist” framework. Thank you for making sense of this stuff!
Table 2.2 on the next page further clouds my thinking. I can link elements of all
the blue text rectangles to my study, and arguments could be made for some of the others.
Table 2.2 Elements of Worldviews

Ontology (What is the nature of reality?)
- Postpositivism: Singular reality (e.g., researchers reject or fail to reject hypotheses)
- Constructivism: Multiple realities (e.g., researchers provide quotes to illustrate different perspectives)
- Advocacy and Participatory: Political reality (e.g., findings are negotiated with participants)
- Pragmatism: Singular and multiple realities (e.g., researchers test hypotheses and provide multiple perspectives)

Epistemology (What is the relationship between the researcher and that being researched?)
- Postpositivism: Distance and impartiality (e.g., researchers objectively collect data on instruments)
- Constructivism: Closeness (e.g., researchers visit participants at their sites to collect data)
- Advocacy and Participatory: Collaboration (e.g., researchers actively involve participants as collaborators)
- Pragmatism: Practicality (e.g., researchers collect data by “what works” to address the research question)

Axiology (What is the role of values?)
- Postpositivism: Unbiased (e.g., researchers use checks to eliminate bias)
- Constructivism: Biased (e.g., researchers actively talk about their biases and interpretations)
- Advocacy and Participatory: Biased and negotiated (e.g., researchers negotiate with participants about interpretations)
- Pragmatism: Multiple stances (e.g., researchers include both biased and unbiased perspectives)

Methodology (What is the process of research?)
- Postpositivism: Deductive (e.g., researchers test an a priori theory)
- Constructivism: Inductive (e.g., researchers start with participants’ views and build “up” to patterns, theories, and generalizations)
- Advocacy and Participatory: Participatory (e.g., researchers involve participants in all stages of the research and engage in cyclical reviews of results)
- Pragmatism: Combining (e.g., researchers collect both quantitative and qualitative data and mix them)

Rhetoric (What is the language of research?)
- Postpositivism: Formal style (e.g., researchers use agreed-on definitions of variables)
- Constructivism: Informal style (e.g., researchers write in a literary, informal style)
- Advocacy and Participatory: Advocacy and change (e.g., researchers use language that will help bring about change and advocate for participants)
- Pragmatism: Formal or informal (e.g., research may employ both formal and informal styles of writing)
So…what do you think? I feel like this theory stuff really bogs me down. Is that
normal?!!! 
You have to keep in mind that the above categorization is the interpretation that these
particular authors give to these different world-views. Okay! Other researchers may
differ in the way they would interpret how each world-view relates to each of these
elements! I guess reading the other book next semester will give me a broader
perspective.
Okay, so let’s move on to the four elements basic to any research process. They
reprinted Crotty’s (1998) famous table in Table 2.3 and recommended adding mixed
methods research as a methodology as shown.
Table 2.3 The Four Elements Basic to Any Research Process

Epistemology | Theoretical Perspective | Methodology | Methods
Objectivism | Positivism (and postpositivism) | Experimental research | Sampling
Constructivism | Interpretivism (symbolic interactionism, phenomenology, hermeneutics) | Survey research | Measurement and scaling
Subjectivism (and its variants) | Critical inquiry | Ethnography | Questionnaires
 | Feminism | Phenomenological research | Observation (participant; non-participant [parents])
 | Postmodernism, etc. | Grounded theory | Interview
 | | Heuristic inquiry | Focus group
 | | Action research ? | Case study
 | | Discourse analysis | Life history
 | | Feminist standpoint research, etc. | Narrative
 | | Add: Mixed methods research | Visual ethnographic methods
 | | | Statistical analysis
 | | | Data reduction
 | | | Theme identification
 | | | Comparative analysis
 | | | Cognitive mapping
 | | | Interpretative methods
 | | | Document analysis
 | | | Content analysis
 | | | Conversation analysis, etc.
I really take issue with this table. To list “survey research” and “measurement and
scaling” as the only methodology and methods related to constructivism indicates a total
lack of understanding of constructivism. I would argue that constructivism could
embrace every methodology listed in this table, as well as some that are not listed (e.g.
teaching experiment, classroom teaching experiment). I would also argue that a
constructivist research project could make use of almost all of the methods listed except
for possibly the “measurement and scaling” (listed under constructivism!) and possibly
the “sampling” technique, if what is meant by this is the “randomized assignment of
individuals to experimental and control groups.” Thanks again for your voice of sanity.
I have found this table confusing since I took that first qual course a few years ago, and
what you’re saying really helps me to find peace with these issues!
Worldviews and Mixed Methods Research
The authors suggest that I might convey my stances on my worldviews in a
section titled “Philosophical Assumptions” or in the methods section of the dissertation.
You probably can advise me better about UGA’s expectations, and I’m sure they’ll become clear to me next year in EMAT 9630 and 9640. (I hope so!)
There are three different stances discussed in the mixed methods literature.
Stance 1. There is one “best” worldview that fits mixed methods research.
Tashakkori and Teddlie indicate that at least 13 authors dub that to be
pragmatism. Reasons are:
1. Both quant and qual research methods may be used in a single study.
2. The research question should be of primary importance --- more
important than either the method or the philosophical worldview that
underlies the method.
3. The forced-choice dichotomy between postpositivism and
constructivism should be abandoned.
4. The use of metaphysical concepts such as “truth” and “reality” should
also be abandoned.
5. A practical and applied research philosophy should guide
methodological choices.
However, Tashakkori and Teddlie also mention one other “best” worldview: the
transformative-emancipatory paradigm, another term for the advocacy/participatory approach. That may explain why I have four table entries from that
column highlighted in Table 2.2, further muddying the water.
Stance 2. Researchers can use multiple paradigms or worldviews in their mixed
methods study. This “dialectical” perspective (Greene & Caracelli, 1997, 2003)
recognizes that contradictions and tensions reflect different ways of knowing
about and valuing the social world and says you can use multiple paradigms. Ah,
I’m starting to like these people.
Stance 3. Worldviews relate to the type of mixed methods design and may vary
depending on the type of design. I could also agree with this stance. …further
adding to the nebulosity of all this theory! 
Okay, so what are you thinking so far? Does it look like my study mixes constructivism,
advocacy/participatory, and pragmatism paradigms?
Yes, but see my remarks about this mixing being consistent with a more encompassing
view of constructivism. I like the way you think!
The Basics of Quantitative and Qualitative Research
Table 2.4 provides a nice set of continua showing elements where quant and qual
research differ in the basic intent and implementation of the research.
Table 2.4 Elements of Qual and Quant Research in the Process of Research
(For each element of the process of research, the table lists what the elements of qualitative and quantitative research tend toward.)

Intent of the research
- Qualitative: Understanding meaning individuals give to a phenomenon, inductively
- Quantitative: Test a theory deductively to support or refute it

How literature is used
- Qualitative: Minor role; justifies the problem
- Quantitative: Major role; justifies the problem; identifies questions and hypotheses

How intent is focused
- Qualitative: Ask open-ended questions; understand the complexity of a single idea (or phenomenon)
- Quantitative: Ask closed-ended questions; test specific variables that form hypotheses or questions

How data are collected
- Qualitative: Words and images; from a few participants at a few research sites; studying participants at their location
- Quantitative: Numbers; from many participants at many research sites; sending or administering instruments to participants

How data are analyzed
- Qualitative: Text or image analysis; themes; larger patterns or generalizations
- Quantitative: Numerical statistical analysis; rejecting hypotheses or determining effect sizes

Role of the researcher
- Qualitative: Identifies personal stance; reports bias
- Quantitative: Remains in background; takes steps to remove bias

How data are validated
- Qualitative: Uses validity procedures that rely on participants, the researcher, or the reader
- Quantitative: Uses validity procedures based on external standards, such as judges, past research, and statistics
Notice the blue text on each side indicating the tendencies of the test error analysis
research. I’m thinking that the intent of my research is about understanding the meaning
individuals give to their error causes and the tool/process itself (qual). There’s not a lot
of closely related lit, so I see it as having a minor role (qual). The tool and its related
process are a combination of open- and closed-ended questions (both). Data will be from
administered instruments, as well as journals, questionnaires, and informal interviews:
numbers and words from 80–130 students at one site (both). Data will be analyzed numerically and with themes, looking for larger patterns and generalizations (both). I’ll do my best to reveal my biases and try to remove them (both). Data will be validated by the participants and the researcher, as well as by statistics (both). So I figure it starts out looking
“qual”-ish, but finishes with a strong “both”. Does that make sense?
Yes! I would add, that from the qualitative aspect of your proposal, you might want to
consider the notion of “viability” rather than “validity” (see von Glasersfeld for a discussion of “viability” versus “validity”). I’m looking at a 2001 von Glasersfeld article
that “suggests the substitution of ‘viability’ or ‘functional fit’ for the notions of Truth and
objective representation of an experiencer-independent reality.” I’m not sure how this
idea affects the study. Please say more!
Table 2.5 matches different types of research problems with methods designs.
Table 2.5 Types of Research Problems and Matching Methods or Designs
(Each type of research problem is paired with the type of method or design suited to studying it.)
- Need to see if a treatment is effective: Experimental design
- Need to see what factors influence an outcome: Correlation design
- Need to identify broad trends in a population: Survey design
- Need to describe a culture-sharing group: Ethnography design
- Need to generate a theory of a process: Grounded theory design
- Need to tell the story of an individual: Narrative research
It looks like we’re dealing with an experimental design.
I’m still not sure how you are going to indicate “effective” with your proposed research.
Are you tracking errors made over several test-taking sessions, and looking for a
reduction in errors in the group using your TEA as compared with a (similar?) group not
using TEA? I wasn’t really planning on a control group since there are so many
uncontrolled variables between classes and students. What is the “treatment” in your
experimental design? I was thinking that all of the students could perform test error
analysis and we (students, teacher, and parents) would give our opinions of the process
and its helpfulness. Error types are categorized into testing process errors and content
errors. I’d like to see if there’s a reduction in testing process errors. I’m not sure it’s fair to look at the same thing with content errors since the content is ever-changing and
growing in complexity. Is “effective” the wrong word to use?
If you are not intending to conduct a “comparison between treatment and control groups”
then I would hesitate to classify your research problem as “needing to see if a treatment is
effective.” It could still be “experimental” in design, but with a different kind of research
problem (in other words, the above matches are over-simplified and restricting). Yes, yes,
yes! In some ways I think you are trying to generate a theory of a process. I basically
want to know if I would recommend it to other teachers or not and under what
circumstances. What happens when students are given the opportunity to analyze their
test errors in a systematic way? (grounded theory) How do students (and parents and
teachers) react to this process? (ethnographic and/or narrative). You do intend to conduct
an “experiment” in the sense that you will be trying out your TEA with a large number of
students, … yes … but will you also collect data from a similar group of students NOT
using TEA? I don’t think so.
The following are situations that they say lead to mixed methods designs:
1. A need exists for both quant and qual approaches. The
combination…provides a more complete picture by noting trends and
generalisations as well as in-depth knowledge of participants’ perspectives. One
form of [data]… might contradict the other. [Together they] can clarify subtleties,
cross-validate findings, and inform efforts to plan, implement, and evaluate
intervention strategies.
2. A need exists to enhance the study with a second source of data.
3. A need exists to explain the quant results (e.g., understand social interactions).
4. A need exists to first explore qualitatively.
Multiple reasons might …exist, and we recommend that investigators first ask
themselves what all the reasons are for using mixed methods research and then
specifically state these reasons clearly in their study.
I think my study falls in the first situational category. I’d like to know quantitatively the
frequencies of error types and qualitatively what the participants [students, teacher(s),
parents] think of the process.
I agree. So this is not going to be a “treatment-control” comparison experiment. True.
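Just to make the quantitative strand concrete for myself, here is a minimal sketch (in Python) of how error-type frequencies might be tallied from completed test error analysis forms. The record format, field names, and error categories below are placeholders I made up for illustration; they are not the actual tool’s design.

```python
# A minimal sketch (not the actual TEA tool) of the quantitative strand:
# tallying error-type frequencies from completed test error analysis forms.
# The categories below are illustrative examples (e.g., "omitted question",
# "ran out of time"); the real tool's categories and data format may differ.
from collections import Counter

# Hypothetical records: one entry per error a student marked on the form,
# tagged as a testing-process error or a content error.
sample_records = [
    {"student": "S01", "kind": "process", "error_type": "omitted question"},
    {"student": "S01", "kind": "process", "error_type": "ran out of time"},
    {"student": "S02", "kind": "content", "error_type": "misapplied formula"},
    {"student": "S03", "kind": "process", "error_type": "misread directions"},
    {"student": "S03", "kind": "process", "error_type": "omitted question"},
]

def tally_error_types(records):
    """Count how often each error type occurs, split by kind of error."""
    counts = {"process": Counter(), "content": Counter()}
    for record in records:
        counts[record["kind"]][record["error_type"]] += 1
    return counts

if __name__ == "__main__":
    for kind, counter in tally_error_types(sample_records).items():
        print(f"{kind} errors:")
        for error_type, count in counter.most_common():
            print(f"  {error_type}: {count}")
```

The qualitative strand (journals, questionnaires, and informal interviews) would be analyzed thematically rather than counted.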