
Week 15 USAID Practice Exercise: Identifying Threats to Validity

Exercise 6-1.1: Identifying Threats to Validity (Answers)
Purpose: To practice identifying possible threats to validity in performance evaluation designs.
Steps:
1. Carefully read each of the following scenarios.
2. Using Reference 6: Threats to Internal and External Validity, diagnose the
potential threats to internal validity in each of the evaluation scenarios.
3. Discuss these threats with your group.
4. What could be done differently to mitigate or avoid these threats?
Scenario 1:
As part of a final evaluation of a phased five-year sports-for-development project, at-risk
youth will be interviewed about the effect of the training on their self-perception and
tendency to engage in risky behaviors. The evaluation methodology calls for the
evaluation team to interview direct beneficiaries from each of the five cohorts to assess
the effectiveness and appropriateness of the training. In order to triangulate findings, the
evaluators will analyze panel data from a standardized self-perception and behavioral
test completed by each participant at the beginning and end of the eight-week training.
- History effect: any number of things could have happened during or after the project
  that could influence the outcomes of interest (macroeconomic changes,
  police/government approaches to at-risk youth, behavioral change communication,
  other projects, etc.).
- Maturation effect: youth self-perceptions and behaviors are highly changeable. Both of
  these outcomes can be expected to change over time regardless of the intervention,
  particularly for beneficiaries in the early cohorts.
- Repeated testing effect: the self-perception and behavioral test is standardized, so
  respondents may learn how best to respond to the questions.
- Mortality effect: with such a transient population, it may be difficult to identify and
  contact respondents. This is particularly the case for beneficiaries in the early
  cohorts.
- Recall bias: the longer the time since participation, the less accurately respondents
  will remember the training and its effects; this is particularly the case for
  beneficiaries in the early cohorts.
- Effects of testing: the pre-test might condition the performance of the training
  participants.
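The maturation and history threats above can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions, and the difference-in-differences correction presumes an untreated comparison cohort that this design does not currently include:

```python
# Illustrative sketch: why a simple pre/post comparison conflates the
# intervention effect with maturation/history (hypothetical numbers).

TRUE_EFFECT = 5.0       # assumed effect of the training on test scores
MATURATION_DRIFT = 3.0  # assumed change that would have happened anyway

pre_score = 50.0
post_score = pre_score + TRUE_EFFECT + MATURATION_DRIFT

# A naive pre/post difference attributes the drift to the training.
naive_estimate = post_score - pre_score
print(naive_estimate)  # 8.0 — overstates the true effect of 5.0

# With a comparison group that experiences only the drift, differencing
# removes maturation/history (a difference-in-differences estimate).
comparison_pre = 50.0
comparison_post = comparison_pre + MATURATION_DRIFT
did_estimate = (post_score - pre_score) - (comparison_post - comparison_pre)
print(did_estimate)  # 5.0 — recovers the assumed true effect
```

In practice the drift is unknown, which is exactly why a comparison group (or cohort-by-cohort comparison) strengthens the design.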
Scenario 2:
In a final evaluation of a legislative strengthening project, the evaluation team is tasked
with assessing the extent to which the intervention helped elected legislators become
more effective advocates for their constituents, and how lessons learned can help
improve implementation of a potential extension. The methodology calls for the use of
participatory rural appraisal (PRA) techniques with sub-groups of constituent
communities, as well as analysis of citizen score cards, which constituents have been
filling out annually to rate their legislator's performance.
- History effect: any number of things could have happened during or after the project
  that could influence the outcomes of interest (new laws, economic growth, other
  projects, etc.).
- Maturation effect: the perceptions and expectations of constituents could change
  over time regardless of the intervention.
- Mortality effect: depending on the electoral cycle, legislators could be voted out of
  office.
- Effects of testing: rating legislators might increase the knowledge and standards of
  constituents, making cross-year comparisons problematic.
Scenario 3:
In a mid-term evaluation of a phased water and sanitation project, the evaluation team
will assess progress against the work plan and compare outcomes between
communities as they gain access to piped water. The evaluators will collect data on
water usage as well as health and income outcomes using mini-surveys and
semi-structured interviews. The evaluation methodology calls for comparing neighboring
communities that have received the intervention with eligible communities that have not
yet received access to clean water.
- History effect: any number of things could have happened during or after the project
  that could influence the outcomes of interest (macroeconomic changes, weather
  patterns, other projects, etc.).
- Selection bias: just because the communities are proximate does not mean that they
  are good comparisons for one another; they could differ across a large number of
  variables.
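One way to probe the selection-bias threat is to check whether treated and not-yet-treated communities actually look alike on baseline characteristics before comparing their outcomes. The communities, covariates, and 10% threshold below are all hypothetical illustrations:

```python
# Hypothetical sketch: even neighboring communities can differ on baseline
# characteristics, so a simple balance check helps diagnose selection bias
# before outcomes are compared.

treated = {"baseline_income": 120.0, "household_size": 5.2, "distance_to_road_km": 2.0}
comparison = {"baseline_income": 90.0, "household_size": 5.3, "distance_to_road_km": 8.5}

# Flag covariates where the groups differ by more than 10% at baseline.
imbalanced = {
    k: (treated[k], comparison[k])
    for k in treated
    if abs(treated[k] - comparison[k]) / abs(comparison[k]) > 0.10
}
print(sorted(imbalanced))  # ['baseline_income', 'distance_to_road_km']
```

Covariates flagged here would need to be addressed (e.g., through matching or statistical adjustment) before outcome differences could be attributed to the intervention.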
Scenario 4:
In a mid-term evaluation of an economic growth project that supports small-businesses
with grants and financial management training, the evaluators will interview the CEOs of
a random sample of recipient businesses. The evaluation will strive to answer if benefits
reached all sectors of the target population (women and minority-owned businesses) as
well as comparing the relative effects of grants with those of the financial management
training. In order to be considered for inclusion, interested businesses had to fill out a
short one-page form. The project was widely advertised in radio, print and television.
- History effect: any number of things could have happened during or after the project
  that could influence the outcomes of interest (macroeconomic changes, regulatory
  changes, additional training, other projects, etc.).
- Maturation effect: as businesses grew from start-ups to established firms, a culture
  change could have occurred within the organization.
- Mortality effect: many small businesses could have gone out of business. Evaluating
  only the surviving businesses introduces selection bias, as it excludes failed ventures.
- Response bias: if respondents feel that additional benefits might be forthcoming,
  they are likely to provide positively biased answers.
- Multiple treatment interference: if some businesses receive both the grant and the
  training, the treatment effects might interact, making it difficult to isolate the
  effect of either one.
Scenario 5:
In a final evaluation of a natural resource management project aimed at stemming soil
erosion, the evaluation team will use data from a GIS system that tracks pastoral
farmers and their flocks to determine changes in grazing patterns over the course of the
five-year project. Selected farmers were given portable GPS locators to carry with them
during the dry season. Over the life of the project, some equipment was
replaced and some upgraded. The evaluators will conduct follow-on interviews with the
farmers to triangulate findings.
- History effect: any number of things could have happened during or after the project
  that could influence the outcomes of interest (weather patterns, macroeconomic
  changes, other projects, etc.).
- Maturation effect: cultural or economic changes could alter the number of
  pastoralists and the location of grazing grounds.
- Selection bias: how were the farmers selected? Are they representative of the whole
  population?
- Response bias: if respondents feel that additional benefits might be forthcoming,
  they are likely to provide positively biased answers.
- Instrumentation effect: the replacement and upgrading of GPS locators over the
  course of the project could influence the findings on grazing patterns.