Instrumentation

Research Design Part 2
Variability, Validity, Reliability

Selecting an Instrument
 Reliability & validity of the instrument (see the sketch after this list)
 Are the subject characteristics the same as those the instrument was developed for?
 Is the instrument the best one? Why?
 Purchase cost?
 Availability? Copyright?
 Simplicity of administering & scoring
 Should you use an existing instrument, modify an instrument, or develop a new instrument?
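
One common way to vet the reliability criterion above is an internal-consistency check on your own pilot data. A minimal Python sketch, using made-up scores and the common (but not universal) 0.70 rule of thumb:

import numpy as np

def cronbach_alpha(scores):
    # scores: respondents x items matrix of item-level responses
    items = np.asarray(scores, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents x 4 Likert items (1-5 scale)
pilot = [[4, 5, 4, 5], [2, 3, 2, 3], [5, 5, 4, 4], [3, 3, 3, 2], [4, 4, 5, 5]]
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")  # >= ~0.70 is a common rule of thumb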
Existing, Modifying, New Instruments: Using an Existing Instrument
 Pros
 Quick to use
 Have established R & V
 Can build on the knowledge base established with the instrument
 Cons
 May not “fit” research question exactly
 May require training for administration, scoring, or analysis
 May incur costs to purchase, score, or analyze (e.g., the MBTI)
 May be too long for the purposes at hand or take too much time to complete
Existing, Modifying, New Instruments: Modifying an Instrument
 Pros
 Can be modified to better suit research question
 Most of the work of creating the tool has been completed
 May be able to compare some results with previous results
 Cons
 Changing a known quantity into something unknown
 Previous reliability and validity indicators may no longer apply
Existing, Modifying, New Instruments: Developing a New Instrument
 Pros
 Can develop instrument to fit specific need
 Instrument itself may make a significant contribution to the field of research
 Cons
 Requires time, effort, resources, expertise
 Requires knowledge of scale development procedures
 Runs risk that instrument will not be reliable or valid for the purpose at hand
Modifying or Developing a New Instrument
 Determine what behaviors/traits to measure
 Intrinsic vs. extrinsic rewards
 Review the literature to determine how the traits are measured
 Herzberg; rec literature
 Consult experts in the field to review the instrument
 Pilot test
 Clarity, ambiguity, time of completion, directions
 Revise
 Check for a distribution of scores (see the sketch below)
 Is there a problem with the question, or a lack of variance on the item?
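
A minimal sketch of the distribution check above, assuming a 1-5 response scale and made-up pilot responses; the 0.5 variance cutoff is arbitrary, purely for illustration:

import numpy as np

# Hypothetical pilot responses: rows = respondents, columns = items (1-5 scale)
pilot = np.array([[5, 3, 5, 1],
                  [5, 2, 4, 2],
                  [5, 4, 3, 3],
                  [5, 3, 5, 2],
                  [5, 2, 4, 1]])

for i, col in enumerate(pilot.T, start=1):
    counts = {s: int((col == s).sum()) for s in range(1, 6)}   # score distribution
    var = col.var(ddof=1)
    note = "  <- little or no variance; revisit the item" if var < 0.5 else ""
    print(f"Item {i}: scores {counts}, variance {var:.2f}{note}")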
Design Validity
 4 types
 Statistical conclusion validity
 Construct validity
 Internal validity
 External validity
Statistical Conclusion Validity
 Accurate determination of whether a relationship exists
 Inflated error rates from multiple tests (see the sketch below)
 Extraneous variance
 Low statistical power
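
To see why multiple tests inflate error rates: with k independent tests each run at alpha = 0.05, the chance of at least one false positive is 1 - (1 - alpha)^k. A short, purely illustrative computation:

alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k   # familywise error rate for k independent tests
    print(f"{k:2d} tests -> P(at least one spurious 'significant' result) = {fwer:.2f}")
# One simple (conservative) safeguard is the Bonferroni correction: test each at alpha / k.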
Construct Validity
 Degree to which a test/measurement measures a hypothetical construct
 Threats
 Using one method to measure the construct
 Inadequate explanation of a construct
 Measuring just one construct & making inferences
 Using 1 item to measure personality
External Validity
 Generalizability of the results
 Population external validity
 Characteristics & results can only be generalized to those with similar characteristics
 Demographics
 Psych experiments with college students
 Ecological external validity
 Conditions of the research can only be generalized to settings with similar conditions
 Physical surroundings
 Time of day
Internal Validity
 Internal validity is strongest when the study’s design (subjects, instruments/measurements, and procedures) controls possible sources of error so that those sources cannot plausibly account for the study’s results.
Internal Validity
 History
 Extraneous incidents/events that occur during the research and affect the results
 Only impacts studies across time
 Attendance at football games/coaching change
 Selection
 If there are systematic differences in groups of subjects
 Gender
 Compare GRE scores & grad school performance between sequences
 Occurs when random sampling isn’t used
Internal Validity
 Statistical regression
 If doing a pre-test/post-test, those scoring extremely high or low on the first test will often “regress to the mean” on the second test
 Scoring based more on luck than actual performance
 The regression effect causes the change, not the treatment
 Don’t group the high/low scorers for the post-test (see the simulation below)
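
A small simulation of the regression effect, with all numbers assumed: observed scores are true ability plus luck, so the top pre-test scorers fall back toward the mean on the post-test even though no treatment was applied:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(50, 10, n)         # stable "true" scores
pre  = ability + rng.normal(0, 10, n)   # pre-test  = ability + luck
post = ability + rng.normal(0, 10, n)   # post-test = ability + fresh luck, no treatment

top = pre >= np.percentile(pre, 90)     # the extreme high scorers on the pre-test
print(f"Top 10% on pre-test: pre mean {pre[top].mean():.1f}, post mean {post[top].mean():.1f}")
# The post-test mean drifts back toward 50 with no intervention at all:
# the apparent change is the regression effect, not a treatment.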
Internal Validity
 Pre-testing
 Pre-test can increase/decrease motivation
 Gives subjects opportunities to practice
 Practice can be a positive if it helps subjects reach their true score
Internal Validity
 Instrumentation
 Changes in calibration of the exam
 GRE – new & old
 Changes in observer scoring
 Fatigue/boredom
 American Idol
 Attrition/Mortality
 Subjects drop out/lost
 Low scorers on the GRE drop out of grad school
Internal Validity
 Experimenter effect
 Presence or demeanor of the researcher impacts results (+/-)
 Course instructor is the PI
 Course evals
 Subject effect
 Subjects’ behaviors change because they are subjects
 Subjects want to present themselves in the best light
 Hawthorne effect
Practice problems
