Class Notes 10/13/10
Instruments and Application
Does the instrument:
 Fit the population?
 Address the outcome of interest?
 What about participant burden?
Research Concepts:
What is the difference between a concept and a variable?
A concept is theoretical, whereas variables are the operationalization of that concept. This is where the
measurement piece is valuable. Think: critical attributes.
Trait Variable: e.g., personality; something that is stable and not changeable
State Variable: something that is changeable by intervention
Conceptualization (of theoretical constructs):
 Clarifies
 Distinguishes
 Defines
How many dimensions (sub-concepts) are necessary in your research? Ex. QOL requires multidimensional evaluation (psychological, financial, physical, etc.)
Critical Attributes:
Critical elements, the important dimensions of the concept
Proxy approach: identifying a surrogate variable to use for measurement. Ex. SES (using educational
level) or acculturation: acculturation has many critical attributes (language proficiency, length of time in
country, food preference, social network, etc.), but you are probably not going to be able to measure all
of them without setting yourself up for measurement error, so you pick a proxy attribute, such as
language proficiency, as your measure, assuming that language proficiency is correlated with acculturation
Critical Attributes continued…
Ex. Glasses
 Functional attribute: visual aid
 Physical attribute: lens
 Is the frame color an attribute? Probably not, as this attribute is variable and not imperative in
defining the concept of “glasses”.
Construct Validity: Are we measuring what we are supposed to be measuring?
Measurement Theories:
What is measurement theory? It allows for the sound acquisition and reproducibility of measurement
characteristics.
Random Error: sources include the manner in which a measure is coded, characteristics or state of the
subject or respondent, chance factors, and characteristics of the measurement device
Classic Measurement Theory: random errors are expected to be normally distributed
In classical measurement theory you just have truth and error (observed score = true score + error)
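A minimal sketch of the truth-plus-error idea (illustrative, not from the notes; the numbers are assumed): simulate observed scores as a stable true score plus normally distributed random error, and estimate reliability as the ratio of true-score variance to observed-score variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

true_scores = rng.normal(50, 10, n)   # T: the stable "truth"
errors = rng.normal(0, 5, n)          # E: random error, normally distributed
observed = true_scores + errors       # X = T + E

# Under classical measurement theory, reliability is Var(T) / Var(X)
reliability = true_scores.var() / observed.var()
print(f"Estimated reliability: {reliability:.2f}")   # roughly 10**2 / (10**2 + 5**2) = 0.80
```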
Multi-trait Multi-method theory (MTMM): a method for assessing construct validity, multiple traits and
methods, a matrix of correlation coefficients
In contrast to classic measurement theory, you further examine the (multiple) traits and associated
errors across different methods; GOAL = assessment of the existing methods to evaluate the traits using
correlation coefficients
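A minimal sketch of an MTMM-style matrix (illustrative traits, methods, and data, not from the notes): each column is one trait measured by one method, and the matrix of correlation coefficients lets you compare same-trait/different-method correlations (expected to be high) against different-trait correlations (expected to be lower).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200

# Two hypothetical traits, each measured by two methods (self-report and clinician rating)
anxiety = rng.normal(size=n)
depression = rng.normal(size=n)

data = pd.DataFrame({
    "anxiety_self":         anxiety + rng.normal(0, 0.5, n),
    "anxiety_clinician":    anxiety + rng.normal(0, 0.5, n),
    "depression_self":      depression + rng.normal(0, 0.5, n),
    "depression_clinician": depression + rng.normal(0, 0.5, n),
})

# The MTMM matrix is the full matrix of correlation coefficients across trait-method combinations
print(data.corr().round(2))
```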
Item-response Theory (IRT): a statistical theory based on a mathematical model; it calculates the probability of a
particular response to a scale item (ex. NCLEX). IRT is common in a lot of educational research.
IRT assumes that every question is independent and that the relationship between the underlying attribute and the
observed score is non-linear
Guessing Parameters: are associated with the item structure (e.g., the number of response options) rather than with the individual
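As an illustration of a guessing parameter (not from the notes; the parameter values are assumed), the three-parameter logistic (3PL) IRT model treats guessing c as an item property, alongside discrimination a and difficulty b, and gives the probability of a particular response as a non-linear function of the underlying attribute (ability) theta.

```python
import math

def prob_correct(theta, a, b, c):
    """3PL IRT model: probability of a correct response given ability theta.
    a = item discrimination, b = item difficulty, c = guessing parameter (an item property).
    """
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Example: a moderately difficult four-option multiple-choice item (guessing floor ~0.25)
for theta in (-2, 0, 2):
    print(theta, round(prob_correct(theta, a=1.2, b=0.5, c=0.25), 2))
```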
FOUNDATION OF PSYCHOMETRIC ANALYSIS:
What is reliability?
 Dependability, accuracy, stability, consistency
 Is there temporal stability of the construct? Use test-retest to evaluate this (the conventional time
for retest is 2 weeks); see the sketch after this list
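A minimal sketch of test-retest reliability (illustrative data, not from the notes): correlate total scores from the same respondents across two administrations, conventionally about 2 weeks apart.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

time1 = rng.normal(30, 5, n)            # total scale scores at the first administration
time2 = time1 + rng.normal(0, 2, n)     # same respondents roughly 2 weeks later

# Test-retest reliability estimated as the correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest r = {r:.2f}")
```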
What does a Cronbach's alpha of 0.75 mean? It reflects how correlated the items in the scale are with one
another; if the total alpha is 0.75, that is the internal consistency of the items as a whole. If everything were
tight and valid you would have an alpha of 1. You then need to statistically investigate each item (item-to-total
correlation and alpha-if-deleted). Overall, Cronbach's alpha is an estimate of error in that measurement; a sketch of the calculation follows.
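A minimal sketch of the Cronbach's alpha calculation (illustrative data and function name, not from the notes): alpha compares the sum of the item variances with the variance of the total score across a set of items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative 5-item scale answered by 100 respondents, with items sharing a common factor
rng = np.random.default_rng(3)
common = rng.normal(size=(100, 1))
responses = common + rng.normal(0, 1, size=(100, 5))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```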
Face Validity: basic; on its face, does the instrument appear to measure what it is intended to measure?
Content Validity: The representativeness or sampling adequacy of the content; can use a panel of experts
and query them on the validity of the items in relation to the scale, readability, cultural appropriateness,
language, etc.
Construct Validity: Theoretical notions. Spell out the theoretical construct (concept analysis). State
how each construct should be related to the others. Develop measures for the construct. Conduct studies to see whether the
observed relationships between the measures match the stated theories.
Kinds of construct validity:
 Convergent validity: if 2 very similar constructs are measured, their measures should converge
(ex. fatigue and its correlation with depression: when creating a new fatigue scale you need to
think a priori about depression, a similar construct, and look at the convergent factor by
examining the correlation). Convergent validity shows that the assessment is related to what it
should theoretically be related to; see the sketch after this list
 Discriminant validity: describes the degree to which the operationalization
is not similar to (diverges from) other operationalizations that it theoretically should not be
similar to; ex. depression should not converge with happiness
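A minimal sketch of checking convergent and discriminant evidence (illustrative scale names and data, not from the notes): a new fatigue scale should correlate substantially with a depression measure it is theoretically related to, and much less with a measure it should diverge from.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150

depression = rng.normal(size=n)
new_fatigue_scale = 0.7 * depression + rng.normal(0, 0.7, n)   # theoretically related construct
happiness = rng.normal(size=n)                                  # construct it should diverge from

# Convergent evidence: substantial correlation with the theoretically related measure
print("fatigue vs. depression:", round(np.corrcoef(new_fatigue_scale, depression)[0, 1], 2))
# Discriminant evidence: much weaker correlation with the theoretically distinct measure
print("fatigue vs. happiness: ", round(np.corrcoef(new_fatigue_scale, happiness)[0, 1], 2))
```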