Validity
Introduction to Communication Research
School of Communication Studies
James Madison University
Dr. Michael Smilowitz
Reliability and Validity
• Remember the distinction:
– Reliability assesses the internal property of the
consistency of the measurement.
– Validity assesses the external property of the
accuracy of the measurement.
A measuring instrument is considered valid
when it truly and accurately measures the
construct it purports to measure.
What is a valid measurement?
• Validity cannot be claimed for an unreliable
measurement instrument.
• But demonstrating the reliability of an
instrument does not also demonstrate its
validity.
• The validity of a measuring instrument is
therefore assessed separately from the
assessment of its reliability.
How is validity assessed?
• There are different types of validity
assessments.
– Each type differs in what is emphasized as
important to achieving accurate measurement.
– Each type differs in the methods of doing the
assessment.
(The material to follow is drawn largely from Wimmer
and Dominick, 1994).
– There are three broad categories of validity
assessments: (1) Judgment-based validity, (2)
Criterion-based validity, and (3) Theory-based
validity.
Judgment-Based Validity Assessments
There are two types of judgment-based
validity assessments:
1. Content validity (sometimes called face validity)
2. Expert jury validity
Judgment-Based Validity Assessments
• Content validity
– Assesses whether the instrument, on face value,
provides an adequate number of representative
empirical indicators.
– Researchers provide arguments to support
their claims that an instrument is measuring
the appropriate empirical indicators.
Judgment-Based Validity Assessments
Content validity

When students complain that a final exam contained material that was not covered in the course, they are complaining about the exam's content validity. Why? Because the valid empirical indicators would be their responses to questions based on what was covered in the course.

If researchers believe that communication competence requires both knowledge of communication principles and the ability to demonstrate effective skills, a paper-and-pencil measure of someone's communication knowledge could not be said to meet the assessment of content validity. Why?
Judgment-Based Validity Assessments

Content validity

Because if both knowledge and skills are necessary, the paper-and-pencil measure fails to provide adequate empirical indicators.
Judgment-Based Validity Assessments
• Expert jury validity
– Very similar to content validity.
– The researcher asks a group of experts on the
subject matter to examine the measuring
instrument and judge whether, in the expert’s
opinion, the instrument accurately measures
what it purports to measure.
Judgment-Based Validity Assessments

• Expert jury validity

Let's say a researcher is interested in measuring "work group cohesiveness." To do so, the researcher develops five questions to measure group members' perceptions of the group's (1) friendliness, (2) helpfulness, (3) expressions of personal interest, (4) level of trust, and (5) willingness to work together. The researcher sends the questionnaire to five experts on group communication and asks them to evaluate whether the questionnaire will provide adequate and representative empirical indicators of work group cohesiveness. In reporting the research, the researcher indicates that a panel of expert judges regarded the instrument as a valid measuring device.
Criterion-based Validity Assessments
• Criterion-based validity assessments
involve assessing an instrument’s relation to
some criterion assumed relevant to the
construct being measured.
• Three methods for assessing criterion-based
validity.
– Predictive validity.
– Concurrent validity.
– Known-groups validity.
Criterion-based Validity Assessments
• Predictive validity
– The assessment is based on the instrument’s
ability to accurately predict important
behavioral manifestations of the construct
being measured.
• Does the SAT (Scholastic Aptitude Test) accurately
predict a student’s academic success at college?
• Do responses to public opinion polls accurately
predict voter behavior?
• Do responses to corporate image inventories
accurately predict behaviors such as purchasing,
favorable press mentions, public support?
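To make the predictive claim concrete, here is a minimal sketch, in Python with invented scores, of how a researcher might check whether instrument scores correlate with a later behavioral criterion such as first-year college GPA (all names and data below are hypothetical illustrations):

```python
# A minimal sketch, with invented data, of a predictive-validity check:
# do scores on the instrument (e.g., SAT) correlate with a later
# behavioral criterion (e.g., first-year college GPA)?
from scipy.stats import pearsonr

sat_scores = [1100, 1250, 980, 1400, 1180, 1320, 1050, 1220]  # instrument scores
first_year_gpa = [2.9, 3.4, 2.5, 3.8, 3.0, 3.6, 2.7, 3.2]    # criterion

r, p = pearsonr(sat_scores, first_year_gpa)
print(f"r = {r:.2f}, p = {p:.3f}")

# A strong, statistically significant positive correlation supports a
# predictive-validity claim; a weak or nonsignificant one undercuts it.
```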
Criterion-based Validity Assessments
• Concurrent validity
– Concurrent validity arguments are based on
the correlation between a new measure and an
already validated measure of a similar
construct.
– Concurrent validity is frequently used when
researchers are extending a particular research
area with updated measures.
Criterion-based Validity Assessments

• Concurrent validity

Let's say Smilowitz wants to develop a new instrument to measure the willingness of work team members to engage in constructive conflict with their colleagues. He reviews the literature and finds that his notion of constructive conflict is similar to (but different from) Infante's (1992) argumentativeness scale. To argue for the validity of his instrument, he asks a number of work team members to complete his new questionnaire and also complete Infante's scale. If Smilowitz finds that the two instruments are highly correlated, he can make an argument for the criterion-based validity of his instrument.
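A minimal sketch, with invented totals, of the concurrent-validity argument above: correlate respondents' scores on the hypothetical new constructive-conflict questionnaire with their scores on the already-validated scale.

```python
# Concurrent validity as a correlation between a new instrument and an
# already-validated measure of a similar construct. Scores are invented.
import numpy as np

new_questionnaire = np.array([18, 24, 15, 30, 22, 27, 19, 25])  # new instrument totals
validated_scale = np.array([40, 52, 35, 61, 47, 55, 42, 50])    # established measure totals

r = np.corrcoef(new_questionnaire, validated_scale)[0, 1]
print(f"correlation between the two instruments: r = {r:.2f}")

# A high positive correlation with the validated measure is the evidence
# offered for concurrent (criterion-based) validity.
```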
Criterion-based Validity Assessments
• Known-groups validity
– This technique is similar to predictive validity
assessments.
– The instrument is administered to a group of
subjects known to have the empirical indicators
associated with the construct under
investigation, and a group that is known to lack
those same indicators.
– If the results of the measurements are
statistically significantly different, the
instrument is said to validly discriminate.
Criterion-based Validity Assessments

• Known-groups validity

A researcher is developing an instrument to measure aggressiveness in elementary-age children. A group of children who are frequently detained after school, and a group of students identified by their teachers as "model children," are both measured by the instrument. If a significant difference is found, the researcher claims known-groups validity.
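A minimal sketch, with invented scores, of the known-groups comparison above: administer the hypothetical aggressiveness instrument to both groups and test whether the group means differ significantly.

```python
# Known-groups validity: the instrument should discriminate between a
# group known to have the indicators and a group known to lack them.
from scipy.stats import ttest_ind

detained_children = [34, 41, 38, 45, 39, 43, 36]  # known to have the indicators
model_children = [22, 25, 19, 27, 24, 21, 23]     # known to lack the indicators

t_stat, p = ttest_ind(detained_children, model_children)
print(f"t = {t_stat:.2f}, p = {p:.4f}")

# A statistically significant difference (e.g., p < .05) supports the claim
# that the instrument validly discriminates between the known groups.
```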
Theory-based Validity Assessments
• Generally, theory-based validity assessments are known as construct validity.
• There are, however, different conceptions of
the meaning and application of construct
validity.
Theory-based Validity Assessments
• Construct validity
Smith (1988) describes construct validity as a
more comprehensive and robust concept than
the other types of validity assessments.
“Because of its global emphasis on the goodness-of-fit
between a measuring instrument and a construct’s
theoretical properties, construct validity is by far the
most important....Indeed, if an instrument has construct
validity, it is considered valid from content and criterion
perspectives as well.”
Theory-based Validity Assessments
• Construct validity

Reinard (1998) also regards construct validity as the most important, but views its application as more methodological than theoretical.
“Construct validation requires that a new measure be
administered to subjects along with at least two other
measures, one of which is a valid measure of a
construct that is known conceptually to be directly
related to the new measure, and another one which
should be a valid measure of a construct that is known
conceptually to be inversely related to the construct of
interest.”
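A minimal sketch, with invented scores, of the procedure Reinard describes: administer the new measure alongside a valid measure of a directly related construct and a valid measure of an inversely related construct, then check the direction of each correlation.

```python
# Construct validation per Reinard's description: the new measure should
# correlate positively with a directly related construct and negatively
# with an inversely related one. All scores are hypothetical.
from scipy.stats import pearsonr

new_measure = [12, 18, 15, 22, 9, 20, 16, 14]
direct_measure = [30, 41, 36, 47, 25, 44, 38, 33]    # conceptually direct relation
inverse_measure = [28, 17, 22, 12, 33, 14, 20, 24]   # conceptually inverse relation

r_direct, _ = pearsonr(new_measure, direct_measure)
r_inverse, _ = pearsonr(new_measure, inverse_measure)
print(f"direct: r = {r_direct:.2f}; inverse: r = {r_inverse:.2f}")

# Construct validity is supported when r_direct is strongly positive and
# r_inverse is strongly negative, matching the theoretical predictions.
```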
Theory-based Validity Assessments
• Construct validity
Wimmer and Dominick (1994) liken construct
validity to discriminant validity.
“...construct validity involves relating a measuring
instrument to some overall theoretic framework to ensure
that the measurement is actually related to other concepts
in the framework. Ideally, a researcher should be able to
suggest various relationships between the property being
measured and other variables.”