Research in Practice: Using Better
Research Design and Evidence to
Evaluate the Impact of Technology on
Student Outcomes
Gail Wisan, Ph.D.
University Director of Assessment
Institutional Effectiveness and Analysis
Florida Atlantic University
Presented at the FAU Faculty Technology
Learning Community
Boca Raton, FL
November 12, 2010
Department of Education Evaluates
Evidence
Perspective/point of view:
Evaluation Research should drive outcomes assessment because:
• it helps identify what works;
• it provides direct evidence;
• it helps improve educational outcomes.
Overview of Presentation:
Benefits/Learning Outcomes
Be able to:
1. Explain evaluation research;
2. Identify the benefits of evaluation research;
3. Explain the use of experimental and quasi-experimental designs in
evaluation research for education assessment;
4. Apply evaluation-research strategies to outcomes assessment at your
institution to improve student learning outcomes.
Outcomes Assessment and
Evaluation Research
Outcomes Assessment, at its most effective,
incorporates the tools and methods of
evaluation research.

1. Outcomes Evaluation Research
2. Field Experiment Research
Outcomes Assessment and
Evaluation Research
Field Experiment Research assesses the
effects of new programs, pedagogies, and
educational strategies on students’ learning,
competencies, and skills
Outcomes Assessment and
Evaluation Research
Outcomes Evaluation Research assesses
the effects of existing programs,
pedagogies, and educational strategies on
students’ learning, competencies, and skills
Outcomes Assessment and
Evaluation Research
Evaluation Research can answer the
question:
How can Assessment Improve Education?
Research Design
Examples: Overview
• Notation: X, O, R
• Experimental Designs
• Pre-Experimental Designs and their problems in educational research:
1. Threats to internal validity (Is X really having an effect?)
2. Threats to external validity (generalizability)
Research Design Examples:
Quasi-Experimental Designs Versus
Pre-Experimental Designs
Quasi-Experimental Designs provide better answers:
1. Better solutions to internal-validity threats (Is X really having
an effect?)
2. Better solutions to external-validity threats (generalizability)
Notation on Diagrams
An X will represent the exposure of a group to an
experimental variable or teaching method, the
effects of which are to be measured.
O will refer to observation or measurement.
R refers to random assignment.
Research Design
How Quasi-experimental Design helps to
solve the problems of Pre-experimental
Design
Experimental Designs
Pretest-Posttest Control Group Design:
Random assignment to two groups
R O X O
R O   O
Experimental Designs
Pretest-Posttest Control Group Design
R O X O
R O   O
Sources of Invalidity
External
Interaction of Testing and X
Interaction of Selection and X ?
Reactive Arrangements ?
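To make the logic of this design concrete, here is a minimal Python sketch using only hypothetical, simulated scores (the student counts, test scale, and effect sizes are all illustrative assumptions, not data from the presentation). It simulates random assignment (R), pretest and posttest observations (O), and a treatment (X), then estimates the effect of X by comparing mean gains:

```python
import random
import statistics

random.seed(42)

# Hypothetical example: 40 students randomly assigned (R) to a new
# teaching method (X) or to a control section; O = a 100-point test.
students = list(range(40))
random.shuffle(students)                        # R: random assignment
treatment, control = students[:20], students[20:]

def simulate(group, effect):
    """Return (pretest, posttest) pairs; `effect` is the hypothetical
    boost from X, on top of ordinary practice/maturation gains."""
    pairs = []
    for _ in group:
        pre = random.gauss(70, 8)               # O: pretest
        post = pre + random.gauss(5, 4) + effect  # O: posttest
        pairs.append((pre, post))
    return pairs

treated = simulate(treatment, effect=6.0)       # X applied
untreated = simulate(control, effect=0.0)       # no X

def mean_gain(pairs):
    return statistics.mean(post - pre for pre, post in pairs)

estimated_effect = mean_gain(treated) - mean_gain(untreated)
print(f"treatment gain: {mean_gain(treated):.1f}")
print(f"control gain:   {mean_gain(untreated):.1f}")
print(f"estimated effect of X: {estimated_effect:.1f}")
```

Because both randomly formed groups share the same history, maturation, and testing exposure, the difference in mean gains isolates the effect of X up to sampling noise.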
Experimental Designs
Posttest-Only Control Group Design
R X O
R   O
Experimental Designs
Posttest-Only Control Group Design
R X O
R   O
Sources of Invalidity
External
Interaction of Selection and X ?
Reactive Arrangements ?
Pre-Experimental Designs
One-Shot Case Study
X O
Sources of Invalidity
Internal
History
Maturation
Selection
Mortality
External
Interaction of Selection and X
Pre-Experimental Designs
One-Group Pretest-Posttest Design
O X O
Sources of Invalidity
Internal
History
Maturation
Testing
Instrumentation
Interaction of Selection and Maturation, etc.
Regression ?
External
Interaction of Testing and X
Interaction of Selection and X
Reactive Arrangements ?
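A short Python sketch, again with purely hypothetical numbers, shows why the one-group O X O design is weak: the observed gain mixes the effect of X with maturation, practice, and history effects that would have occurred anyway.

```python
import random
import statistics

random.seed(1)

# Hypothetical one-group pretest-posttest (O X O): one class, no control.
pre = [random.gauss(70, 8) for _ in range(25)]      # O: pretest
# Each posttest mixes an assumed 6-point effect of X with a ~5-point
# ordinary practice/maturation gain that would occur without X.
post = [p + 6.0 + random.gauss(5, 4) for p in pre]  # X, then O: posttest

observed_gain = statistics.mean(post) - statistics.mean(pre)
print(f"observed gain: {observed_gain:.1f}")
# The design reports only the combined (~11-point) gain; without a
# control group it cannot separate the effect of X from history,
# maturation, or testing effects.
```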
Pre-Experimental Designs
Static-Group Comparison
X O
  O
Sources of Invalidity
Internal
Selection
Mortality
Interaction of Selection and Maturation, etc.
Maturation ?
External
Interaction of Selection and X
Threats to Internal
Validity
History, the specific events occurring
between the first and second measurement
in addition to the experimental variable.
Maturation, processes within the
respondents operating as a function of the
passage of time per se (not specific to the
particular events), including growing older,
growing hungrier, growing more tired, etc.
Testing, the effects of taking a test upon the
scores of a second testing.
Threats to Internal
Validity
Instrumentation, in which changes in the
calibration of a measuring instrument or
changes in the observers or scorers used,
may produce changes in the obtained
measurements.
Regression. This operates where groups
have been selected on the basis of their
extreme scores.
Threats to External
Validity
Interaction of Testing and X. A pretest might
increase/decrease the respondent’s
sensitivity or responsiveness to the
experimental variable, making the results
obtained for a pretested population
unrepresentative for the unpretested
universe from which the respondents were
selected.
Interaction of Selection and X
Threats to External
Validity
Reactive Arrangements. This would preclude
generalization about the effect of the
experimental variable upon persons being
exposed to it in nonexperimental settings.
Multiple-X Interference. This is likely to occur
whenever multiple treatments are applied to
the same respondents, because the effects
of prior treatments are not usually erasable.
Threats to Internal
Validity
Selection. There could be biases resulting in
differential selection of respondents for the
comparison groups.
Mortality. This refers to differential loss of
respondents from the comparison groups.
Interaction of Selection and Maturation, etc., which in certain of the
multiple-group quasi-experimental designs might be mistaken for the
effect of the experimental variable.
Quasi-Experimental
Designs:
Nonequivalent Control
Group Design
O X O
O   O
Quasi-Experimental
Designs:
Nonequivalent Control
Group Design: Comparing
Math Classes Example
O X O
O   O
Quasi-Experimental
Designs
Nonequivalent Control Group Design
O X O
O   O
Sources of Invalidity
Internal
Interaction of Selection and Maturation, etc
Regression ?
External
Interaction of Testing and X
Interaction of Selection and X ?
Reactive Arrangements ?
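The comparing-math-classes example can be analyzed with a simple difference-in-gains computation. The sketch below is a minimal illustration with invented scores for two intact sections (no random assignment); the section names and numbers are assumptions, not data from the presentation.

```python
import statistics

# Hypothetical math-class example: two intact sections, no R.
# Section A gets the new pedagogy (X); section B does not.
section_a = [(62, 75), (70, 82), (65, 76), (58, 71), (74, 85)]  # (pre, post)
section_b = [(71, 76), (78, 84), (69, 73), (80, 85), (75, 81)]

def mean_gain(pairs):
    return statistics.mean(post - pre for pre, post in pairs)

gain_a = mean_gain(section_a)   # treated section
gain_b = mean_gain(section_b)   # comparison section
print(f"gain A: {gain_a:.1f}, gain B: {gain_b:.1f}")
print(f"difference-in-gains estimate of X: {gain_a - gain_b:.1f}")
```

Comparing gains rather than raw posttests partially controls for the initial nonequivalence of the sections, though, as the slide notes, the interaction of selection and maturation remains an internal-validity threat.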
Examples of Other Quasi-Experimental Designs
Time Series
O O O O X O O O O
Multiple Time Series
O O O O X O O O O
O O O O   O O O O
Quasi-Experimental
Designs
Time Series
O O O O X O O O O
Sources of Invalidity
Internal
History
Instrumentation ?
External
Interaction of Testing and X
Interaction of Selection and X ?
Reactive Arrangements ?
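A time-series design can be read off directly from the run of observations. The following Python sketch uses an invented series of mean course-exam scores over eight terms (all values are illustrative) with the new technology (X) introduced before term 5:

```python
import statistics

# Hypothetical interrupted time series (O O O O X O O O O):
# mean exam score for eight successive terms; X enters before term 5.
scores = [71, 72, 70, 73, 79, 80, 78, 81]
before, after = scores[:4], scores[4:]

shift = statistics.mean(after) - statistics.mean(before)
print(f"pre-X mean:  {statistics.mean(before):.1f}")
print(f"post-X mean: {statistics.mean(after):.1f}")
print(f"level shift at X: {shift:.1f}")
# A flat pre-X series followed by a sustained jump is hard to explain
# by maturation or testing alone, but history (some other event in the
# same term as X) remains the chief internal-validity threat.
```

Adding a comparison series that never receives X (the multiple time-series design above) is what lets that history threat be ruled out.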
U.S. Dep’t. of Ed Focuses on
Level of Evidence
The U.S. Department of Education highlights
“What Works” in educational strategies;
“What Works” is based upon an assessment of the
level of evidence provided by educational
research: evaluation research.
Department of Education Evaluates
Evidence
General Education & Learning Outcomes Assessment:
The National Context
At the National Symposium on Student Success,
Secretary of Education Margaret Spellings and
others called on colleges to measure and provide
evidence of student learning.
“Measuring Up” national report cards by state:
little data on whether students are learning.
Outcomes assessment has two purposes
• Accountability (standardized national tests?)
• Assessment/Effectiveness
—Are Students Learning? How much?
Performing Assessment as
Research in Practice
Assessment should seek systematic evidence
of the effectiveness of existing programs,
pedagogies, methodologies and approaches
to improve student learning outcomes and
instill a cycle of continuous improvement.
Implementation Strategy: Aim for Quasi-Experimental Designs (or Experimental Designs)
Revitalizing Assessment:
Consider these Next Steps
Encourage comparing teaching
strategies when faculty are teaching
more than one section of the same
course
Communicate and Use Results
QUESTIONS?
Please email
gwisan@fau.edu