POL 242
Introduction to Research Methods
Assignment Five Tutorial
Indexes
July 12, 2011
Anthony Sealey
anthony.sealey@utoronto.ca
http://individual.utoronto.ca/sealey
Agenda
(1) Introduction
(2) Example
(3) Exercise
Introduction
Please be sure to sign the
attendance form.
Example
Research Questions
“Old” and “New”
Research Questions
Left-Right Cleavages
Smiling Stephen
Smiling Jack
Smiling Liz
Example: ‘Old Politics’ Index
Research Questions
Go to:
http://individual.utoronto.ca/sealey
Go to:
Teaching/POL 242/
Assignment Five Tutorials/Code
… and of course, go to Webstats!
Research Questions
Recall that in the tutorial for
assignment one, we considered citizens'
attitudes about the competence of the
two most successful political leaders in
the most recent election: Harper and
Layton.
Here, we are going to learn how to
build an index out of these two variables
(plus three others).
Let's begin with the point-and-click
method, which sets us up to build on
the syntax.
First, begin by selecting the CES 2004
data set and choosing the ‘Frequencies’
type of analysis.
Continuing with the ‘point and
click’ method, select the ‘where
would you rate Stephen Harper’
variable and ‘all’ statistics.
This is exactly the same process
used to run frequency
distributions.
After running the analysis, we can
then start building our index
using syntax by selecting 'here'.
After selecting ‘here’ in the output page
we are taken to the syntax page where we
can edit and add code.
Here, insert the code I’ve prepared that
eliminates missing values and recodes
variables so that they are oriented in the
proper direction. In order to ensure that
all measures contribute equal weight to
the index, we recode so that each ranges
from 0 to 10 as the first variables (leader
competency scores) did.
You can download the code here:
http://individual.utoronto.ca/sealey/Site/POL_242_files/POL242IndexesStatsCoding.rtf
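To give a rough sense of what that prepared code does, here is a minimal sketch of the missing-values and recode steps for one item, the confidence-in-big-business measure. The raw variable name ('rawbb'), its missing-value codes, and the four-category recode are placeholders for illustration; the actual names and codes are in the file above.

* Hypothetical example: declare non-responses as missing, then rescale the item to run from 0 to 10.
MISSING VALUES rawbb (8, 9).
RECODE rawbb (1 = 10) (2 = 6.67) (3 = 3.33) (4 = 0) INTO conbb.
VARIABLE LABELS conbb 'Confidence in big business (0 to 10)'.

Rescaling every item to the same 0-to-10 range is what keeps each measure contributing equal weight to the index.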
As we go through the syntax, you
will notice that I've made notes to denote
each of the measures used, as well as the
final steps in the analysis. For example,
for the 'job creation' variable, notice that
it has '/* ** --- ** */' surrounding it. This
helps us to organize the analysis and
remember what we're doing at each
stage. As long as we use asterisks (***),
Webstats will ignore the note when
running the commands.
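For instance, a note set off this way might look like the following (the wording here is mine, for illustration):

/* ** --- Job creation variable --- ** */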
After entering the syntax that tells
Webstats how to declare missing values and
recode each of the variables, we find the
syntax used to run the reliability test.
Remember, the reliability test (or Cronbach's
Alpha) is a test we use to see how closely related
our individual measures are to one another.
The actual syntax always follows the same
basic format; all you have to do is copy the
syntax and insert the variables you are using
(instead of 'shcomp', 'jlcomp', 'conbb', 'conun' and
'jcpriv').
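As a sketch, the reliability command for these five recoded items follows the standard SPSS RELIABILITY format. I am assuming the /SUMMARY = TOTAL subcommand here; it is what produces the item-total statistics discussed below.

* Cronbach's Alpha for the five items; /SUMMARY = TOTAL adds the item-total table.
RELIABILITY
  /VARIABLES = shcomp jlcomp conbb conun jcpriv
  /MODEL = ALPHA
  /SUMMARY = TOTAL.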
When you eventually run your commands,
review what the Cronbach's Alpha result tells
you. Do your variables seem to fit together? If
not, have you coded them properly, so they are
all measuring in the same direction? Is there
anything else that might be going on?
Finally, we can create our new index variable (here
called "OldLRI"). We can also tell Webstats to run
our new index variable, along with the descriptive
statistics we find useful for describing the index.
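A minimal sketch of that final step, assuming the index is the simple sum of the five recoded 0-to-10 items (the exact formula and the statistics requested are in the prepared code file):

* Build the additive index, then describe its distribution.
COMPUTE OldLRI = shcomp + jlcomp + conbb + conun + jcpriv.
FREQUENCIES VARIABLES = OldLRI
  /STATISTICS = ALL.

Summing five 0-to-10 items is what gives the index the 0-to-50 range discussed below.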
Here we focus on the reliability (or Cronbach’s
Alpha) output.
We want to focus on three things. First, the
overall Alpha score. If the score is around
0.50, we are pretty confident that our
variables are inter-connected and measuring
some broader concept. Here, the score is a
little bit lower, indicating that we have reason
to be concerned about whether these
questions appropriately measure the same
underlying concept.
Second, if our Alpha score is low, we want to
check whether we can increase it by deleting
a variable. We do so by looking at the "Alpha if
Item Deleted" column. Here, we could improve
our measure by deleting the 'Jack Layton
competency' item.
Third, to make sure our variables are all going
in the right direction (e.g., all oriented so that
higher values mean more 'old politics right'),
we can look at the "Corrected Item-Total
Correlation" column. If any of the values are
negative, we need to go back and check our coding.
Finally, we come to the frequency
distribution of our index.
Once we create an index variable (or
conceptual variable), we describe
it in broad terms (such as 'old politics'
left-right), rather than in terms of its
individual parts (such as levels of
confidence in big business or unions).
Notice that the index ranges from 0 to
(about) 50, since it is the sum of five
items that each range from 0 to 10.
Those with higher scores (say 40 to
48) can be described as being very 'old
politics right', and those with very low
scores (say 0 to 10) can be described
as being very 'old politics left'.
Exercise
Research Questions
Option A: Using the same data set,
create a better ‘old politics’ index.
Option B: Using the same data set,
create a ‘new politics’ index.
Option C: Using any data set you
like, create an index for any
concept that you like.