Research Design

Week 6: Five Common Research Designs in Public Administration Part I
Graeme Scott - ADMN 502A (A01)
October 13th, 2010
Part I: Meta Issues in Research Design
-Regardless of which design you choose, consider these issues
- depends on your philosophy and what you intend to achieve
Overview
-Purpose & Function of Research Design
-Purpose of Research
-Causality
-Unit of Analysis
-Time
-Boundaries
-Sampling
-Nonprobability Sampling - More relevant for 502a
-Probability - More relevant for a positivist framework
-Costs and benefits of the two
Purpose of Research Design
-What design will provide the best evidence?
-Within the research constraints? Primary constraints: time and money
-What evidence do you need to answer the question you have posed
-How many people engaged in tax evasion? Then you want to describe and count something
-What if you want to know their motivations? Explore their subjective impressions
-Taxes cause tax evasion. Does GST cause an increase in tax evasion?
-Explain the relationship. Determine causality using a Natural Experiment
-Design your research to answer the question you posed
-Easier to rewrite question than redo the research
-598s often find evidence to answer a question they didn’t ask
Functions of Research Design
1. To develop or conceptualize an operational plan
2. To ensure the procedures adopted within the plan are adequate to provide valid, objective and accurate solutions
3. To ensure the evidence collected enables the researcher to answer the initial research question as unambiguously as possible
-What type of evidence is needed to answer the question in a convincing way?
-Meta Issues: When designing a building, materials and schematics are needed. Who will use the building and for what? The design flows out of that.
-Used to structure research and show how parts fit together
-Question is of vital importance to research design
Purpose of Research
1)Exploration
-Think about research questions and what they’re saying
- Exploratory research is like scoping reviews. Not comprehensive or exhaustive, but familiarizes
self/clients with topics/issues
- Can help determine data collection method. Set-up for descriptive/explanatory research
- Provides insight, but you can't make decisions from it. You often can't generalize from it
2)Description
- Situations and events. Who, what, where, when, how. How often. How many.
- Can’t give the cause of the phenomena you’re counting. Don’t go down the causality aisle.
- This is fundamental to research when well done. Gives knowledge to help target research.
- Census is a good example that has been very valuable for program design
- In order to explain something you must describe it first
3)Explanation
- Clients want to know why. Peel away layers to find out why.
-Need exploration and description to go down the why path
- Crime rose steadily from the 1960s. Gloom and doom were forecast in the 1990s. People wanted to know the causes of crime. In the mid-1990s, there was a sudden, sharp reversal in the crime rate. A new description was needed to capture what was happening. What was causing the change in trend?
- Some suggested Roe v. Wade. Not the case. Debunked.
- Causality tied to explanation.
Causality
- Causality vs. Correlation – tied to explanation
- Causation dependent on correlation, but that is only the start of the story.
- Ice cream sales correlate with shootings: temperature is the missing link (see the sketch after this list)
-Intervening, moderating, antecedent variables
- You might think x causes y, but it might be z
- Why do women earn less than men? Gender doesn’t cause low wages. Socialization?
Ceteris Paribus, what are the other explanations? Bargaining power. Maternity leave.
- "TV causes all social ills." TV itself doesn't cause anything. Moderating variables (inactivity, overeating) cause obesity, etc.
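Not part of the lecture, but a minimal sketch (Python, assuming NumPy is available) of how a confounder like temperature can make two unrelated variables correlate, as in the ice cream/shootings example above. All numbers are invented.

```python
# Toy simulation (all numbers invented): temperature drives both ice cream
# sales and shootings, so the two correlate with no causal link between them.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

temperature = rng.normal(20, 8, n)                    # the confounder z
ice_cream   = 5 * temperature + rng.normal(0, 10, n)  # x depends only on z
shootings   = 2 * temperature + rng.normal(0, 10, n)  # y depends only on z

print("corr(ice cream, shootings):",
      round(np.corrcoef(ice_cream, shootings)[0, 1], 2))

# Remove the confounder's contribution and the spurious correlation vanishes
resid_x = ice_cream - 5 * temperature
resid_y = shootings - 2 * temperature
print("corr after removing temperature:",
      round(np.corrcoef(resid_x, resid_y)[0, 1], 2))
```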
The conditions to conclude causation
1. Covariation
2. Temporal Asymmetry (cause and effect don't happen at the same time; a temporal lag is needed for causation)
3. No plausible rival hypothesis
Activity
-Read handout
-Did the privatization of liquor stores in BC cause increased alcohol consumption?
-Covariation, Temporal Asymmetry, Rival hypothesis
-Covariation? Yes. Temporal asymmetry? Consumption increased first; increased consumption may have caused the increase in liquor stores, not the other way around.
-Plausible rival hypotheses: population increase; the Liberals coming into power in BC; income increase (per capita income increased before the consumption increase). Alberta privatized first, and the income increase in Alberta also started first.
-Find holes in people's theories. Discerning causality is difficult
Causality
-Very difficult to discern causality
-Does this concern you?
- Yes, Public Administrators should be concerned with causality in policy
-Not all research designs can explain
- Not all designs can determine causality. Randomized Controlled Trials may not be the
only way.
-Not all researchers understand what it means to truly explain something
Unit of Analysis
-What or whom you are studying
- May be individuals, groups, families, children, organizations, processes, programs, policies, regulations, tools, literature, speeches, etc. No limit to what your units can be
-Be sure to define correct unit of analysis
- Many 598s don’t define unit of analysis correctly
-Marriages or married couples
-Crimes or criminals
-Corporations or managers or shareholders
-Do not want to make assertions about one unit based on analysis of another
-Ecological Fallacy - making inferences about individuals based on the groups they belong to
- Lawrence Summers (economist and President of Harvard), on diversifying the science and engineering workforce: the pattern of fewer women in tenured positions was due to ability, not discrimination, according to Summers. More men sit at the top end of the distribution spread, so more men are worthy of tenured positions.
-Ecological fallacy: Taking results that applied to high school students and applying to all
women
Time
- Time-related options in research design
- Collect data at one period of time or at several time periods
- Make observations at one point in time or over a period of time
-Cross-sectional studies: observation at one point in time
-Longitudinal studies: observation at several points in time
-In general, longitudinal data are better than cross-sectional data, but cross-sectional data are still good
-Time series vs. repeated measures
Boundaries
-Geographic boundaries
-Complex, Street, Neighbourhood
-Voting district, City, Province
-Country, Continent
-East of the Rockies, North of the Great Lakes
-Economic boundaries
-Income, wealth, consumption
-Poverty, LICO
-LDCs
-Governance boundaries
-Parliamentary system, dictatorship
-Other
- Individuals with phones, cars, drivers' licenses; people over 15; FTEs
- Questions we can answer may depend on resources we have available
Sampling
-What or whom within your unit of analysis will you study?
-How will you obtain your observations?
-Sampling: Process of selecting observations
-Sample: A small part or quantity intended to show what the whole is like (As though
quantitative and generalizable – Dr. Tedds) …; a specimen taken for testing or analysis. (Oxford
Dictionary Online)
Why do we sample?
- Cost of studying the whole population
- Tedious to study the whole population
-Cost of obtaining elements
-Size of the population
-Convenience and accessibility of elements
-Basically two sampling strategies available:
Nonprobability
- Willy-nilly, but not really (Dr. Tedds). No logic of probability in selection. Readily available selection (convenience).
-Judgmental – choosing units based on knowledge of the subject (deviance/success)
-Snowball – pick up units as you roll down a hill. One unit helps you find another
-Quota – choose units based on representativeness as a proportion of the population, but no other selection method
-Theoretical – sample a few, then find some more.
-Volunteer – self-selecting.
- difficult to tell if samples are representative
Probability - simple random, systematic random, stratified, cluster.
-Each member has a known probability of being selected into the sample. You must know the entire population
Simple Random Samples
- Phonebook method: open the book and choose randomly. Difficult to do, especially with
computers
- A simple random sample procedure is one in which every possible unit of observation in the
population is equally likely to be chosen
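A minimal sketch (Python, not from the lecture) of drawing a simple random sample from a hypothetical frame, where every unit is equally likely to be chosen:

```python
# Sketch of a simple random sample (hypothetical frame of taxpayer IDs).
# Every unit in the frame has the same chance of ending up in the sample.
import random

random.seed(42)                                          # reproducible example
population = [f"taxpayer_{i}" for i in range(1, 1001)]   # N = 1000 (made up)

n = 50                                                   # desired sample size
sample = random.sample(population, n)                    # draw without replacement
print(sample[:5])
```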
Systematic Sampling
-Decide on sample size: n
-Divide frame of N individuals into groups of j individuals: j=N/n
-Randomly select one individual from the 1st group
-Select every jth individual thereafter
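A short sketch (Python, hypothetical frame) of the systematic sampling steps listed above:

```python
# Sketch of systematic sampling: decide n, compute the interval j = N/n,
# pick a random start within the first group, then take every j-th unit.
import random

random.seed(1)
population = [f"household_{i}" for i in range(1, 1001)]  # N = 1000 (made up)

n = 40                         # desired sample size
N = len(population)
j = N // n                     # sampling interval (groups of j individuals)
start = random.randrange(j)    # random individual from the first group
sample = population[start::j]  # every j-th individual thereafter

print(len(sample), sample[:3])
```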
Stratified Sampling
Overview of stratified sampling:
-Divide population into two or more subgroups (called strata) according to some common
characteristic (e.g. gender, race)
-A simple random sample is selected from each subgroup
-Samples from subgroups are combined into one
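A rough sketch (Python, made-up data) of the stratified procedure above, assuming proportional allocation across strata:

```python
# Sketch of stratified sampling: divide the frame into strata by a common
# characteristic, take a simple random sample within each stratum, combine.
import random
from collections import defaultdict

random.seed(7)
# Made-up frame of (person_id, gender) pairs
frame = [(i, random.choice(["women", "men"])) for i in range(1, 1001)]

strata = defaultdict(list)
for person_id, gender in frame:
    strata[gender].append(person_id)          # population split into strata

total_n = 100
sample = []
for gender, members in strata.items():
    share = round(total_n * len(members) / len(frame))   # proportional allocation
    sample.extend(random.sample(members, share))         # SRS within the stratum

print(len(sample))   # roughly total_n (rounding can shift it by one)
```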
Cluster Sampling
- sampling from heterogeneity, multiple groups
-Population is divided into several “clusters,” each representative of the population
-A simple random sample of clusters is selected
-Generally, all items in the selected clusters are examined
-An alternative is to choose items from selected clusters using another probability sampling technique
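A brief sketch (Python, made-up clusters) of the cluster sampling steps above: take a random sample of clusters, then examine every unit in the chosen clusters.

```python
# Sketch of cluster sampling: the population is split into clusters
# (e.g. neighbourhoods), a simple random sample of clusters is drawn,
# and every unit in the chosen clusters is examined.
import random

random.seed(3)
# Made-up clusters: neighbourhood -> household IDs
clusters = {
    f"neighbourhood_{c}": [f"household_{c}_{i}" for i in range(1, 51)]
    for c in range(1, 21)                     # 20 clusters of 50 households each
}

chosen = random.sample(list(clusters), 4)     # simple random sample of clusters
sample = [unit for name in chosen for unit in clusters[name]]   # all units kept

print(chosen)
print(len(sample))                            # 4 clusters x 50 households = 200
```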
Sampling Considerations
-What constraints are you operating under?
-Time, money, etc.
-Can/Should you exploit a priori information? Judgemental
-Do you know/have access to the population?
-Does the sample need to be representative? Why?
-Do your results need to be generalizable? Representativeness
-Do you need to determine cause and effect?
Summary
-Design is essential to providing an answer to your question
-Consider carefully the purpose of the research
-The bar for explanatory research is causality
-Clearly ID your unit of analysis
-Keeping in mind the research question
-Establish any boundaries
-Keeping in mind resource constraints
-Decide on your sampling method
-Keeping in mind the purpose of your research
-Quality over quantity
-Taking proper samples is easier said than done
Part 2
Aim of lecture
-Review positivism, post-positivism, hermeneutics, and post-structuralism
- Mostly talk about hermeneutics and positivism
-Introduce you to three research designs:
-Case study
-Comparative
-Historical
-Illustrate that these research designs can be used with different epistemologies
Review of philosophy of research
-Three research designs:
-Case study
-Comparative
-Historical
Case study design
- Meaning and interpretation change over time
- Discrete cases. Separate from each other
- There is always something unique about the case you choose
-Single or multiple cases
-Assumption of uniqueness
-Theoretical sampling
- Not based on total population and representative. Random sample often not meaningful
In case study design
-Large amount of data
-Thick description
-Complex relationships between variables
-Building understanding; establishing meaning
-Can improve how people frame & solve problems arising in the same factual context
-Problems: findings argued not to be generalizable. Difficulties establishing causation
Case study design & generalization
- If you want to generalize beyond your case, some say you need a random sample. A key figure in public administration (Barzelay) argues case studies can produce empirical generalizations:
-Scenarios – possible connections
-Problem-solution pairs – exemplary solutions to re-occurring problems
- Bureaucrats have to deal with uncertain information. Tests can't always show what people are feeling (x-rays can't show pain)
-Role frames – tacit norms and appreciations that lead to particular problem definitions
& actions
- Way that we understand a particular problem. Legislation is never completely
clear about roles. In case studies, must consider relationships being repeated in
other situations
-Methodological – ways of doing public administration case studies
- Thick descriptions draw a mental picture of a situation. Example: courts seen as making fairer decisions than bureaucrats
Comparative method
1. Introduction
2. Why compare?
3. Comparison for explanation – cause and effect (positivism)
4. Comparison as a mode of exploration (hermeneutics)
Comparative method
-Selection of two or more jurisdictions in order to understand the nature of, and reasons for,
similarities and differences
-Method used to examine these jurisdictions
-Comparative research is used in many disciplines and for many purposes
- Comparison to experimental method. Approximates experimental conditions
-Two main approaches:
-Some analysts use comparison to approximate experimental conditions
-Comparative research can also be used to facilitate the exploration of differences and
extreme cases
-Comparative research is important in the context of globalization
Why compare?
“What should they know of England who only England know?” Rudyard Kipling
- Example: understand the connection between form of gov't and events
- Learn from other jurisdictions that have had successes and failures
-To determine what (if anything) makes our own system distinct
-To appreciate the similarities between jurisdictions
-To be aware of policy alternatives
-To understand the consequences of different political arrangements
-To determine the impact of global trends on specific jurisdictions and the challenges for global
governance
- Are countries converging in terms of gov't expenditure?
- NAFTA, EU, UN bring countries together under single governance arrangement. Would
help to understand their current gov’t arrangement
Comparison for explanation
-A similar logic to experimental designs (Burnham et al 2004).
-Assumption: there are common laws of causation across jurisdictions
-Universalist, linear, quantitative
Example: the ‘logic of industrialization’ thesis (Wilensky 1975) - broad causal relationship
between level of economic development and allocation for social expenditures
- As countries become more industrialized, they must develop a welfare state. People
don’t have subsistence land or support communities, so some form of continuous
income is necessary.
- States can be lined up from most to least industrialized and their welfare states will correlate. This analysis does not include factors outside the above. The argument wasn't intended to be causal.
- Not all countries followed the same industrialization pattern
Sampling in comparative for explanation
-Most similar sampling: jurisdictions as similar as possible on ‘intervening’ variables, differ on
independent variable
-Dependent variables (Y)– the phenomena that the research wants to explain
-Independent variables (X)– the things we think may influence the dependent variable
-‘intervening’ variables (X) – everything else
- If you can match these (units with the same culture, gender, etc.) you can control for them so that they won't confound the analysis
-Most different sampling: jurisdictions differ as widely as possible on ‘intervening’ variables,
independent variable is shared
-History, culture, everything different but the independent variable (ex. Voting systems)
Challenges
-Sample selection: who and how many?
- No clear method for selection
-Sample size: small-N problems and over-determination
-Not always clear what sample size is appropriate. You don’t want more variables than units in
sample
-Galton’s problem: cases are not independent
- Will former colonies have a western system of government? Using only former
English colonies may present a false relationship
-selecting cases, not using random sampling
- at what point do we go beyond description?
-focus on differences may exaggerate those differences
-Do attitudes/concepts/institutions travel? (travelling problem)
- You may not be measuring the same thing from jurisdiction to jurisdiction
-Data inconsistencies
-Data-driven analysis: reflects the political priorities of official data production
-Parsimony vs. richness in analysis
Sampling issues:
- Problem of selecting cases on the dependent variable
- The problem of controlling for ‘intervening’ variables
Comparison as a mode of exploration
-Anomalous or critical cases, puzzles and paradoxes (Castles 1989)
- Why has the US failed to produce universal healthcare?
-Focus on qualitative differences
-Case selection guided by theoretical problem
-Generating specific hypotheses for testing
- What are the intentions of testers (politicians, etc.)?
Comparative – summary
- Cannot replicate experimental conditions completely. Can't always control for everything
- Comparative designs must learn to deal with the effects of globalization
- Cross-fertilization of ideas and policy transfer make it difficult to evaluate countries separately
-Comparative research never replicates experimental conditions completely
-Not all comparative research assumes universal causal patterns or seeks to discover linear
causation
-Comparative research offers a rich source of testable hypotheses
-Increasingly, comparative designs have to grapple with the impact of cross-national and global
pressures
Lecture summary
Research design must be considered together with underlying epistemology.
Epistemology guides the specifics of the research design in each case.
Positivist case studies, comparative studies and historical studies all seek to establish
generalizations and see facts as neutral.
Hermeneutic case studies, comparative studies and historical studies seek to establish an
empathic picture of events and beliefs.
- less interested in establishing causal relationships
Next time: Workshop with Laurie Waye from the Writing Centre