CJE3444 Crime Prevention


CJE3444

Crime Prevention

Chapter 3

Evaluation and Crime Prevention

Dr. Elizabeth C. Buchholz

Types of Evaluation

 Evaluation of crime prevention refers to investigating the impact of a prevention technique or intervention on the level of subsequent crime, fear, or other intended outcome.

 Investigating the impact

 Impact Evaluation

 Focus on what changes occur after the introduction of the policy, intervention, or program.

 For instance: Do neighborhood watch programs reduce fear and actual crime rates once introduced?

 Do police patrols reduce drug sales in an area?
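To make this before-and-after comparison concrete, here is a minimal sketch in Python. The burglary counts, the comparison area, and the simple difference-in-differences framing are illustrative assumptions, not figures or methods from the chapter.

```python
# A minimal sketch of the comparison an impact evaluation makes: crime
# counts before and after the intervention in the program area versus a
# similar comparison area. All counts and labels are hypothetical.

def percent_change(before: float, after: float) -> float:
    """Percent change from the pre- to the post-intervention period."""
    return (after - before) / before * 100

# Hypothetical yearly burglary counts.
target_before, target_after = 240, 180          # area with the neighborhood watch
comparison_before, comparison_after = 250, 245  # similar area without the program

target_change = percent_change(target_before, target_after)
comparison_change = percent_change(comparison_before, comparison_after)

# The "net" effect is the change in the program area relative to the
# change in the comparison area.
net_effect = target_change - comparison_change

print(f"Program area change:    {target_change:.1f}%")      # -25.0%
print(f"Comparison area change: {comparison_change:.1f}%")  # -2.0%
print(f"Net program effect:     {net_effect:.1f} percentage points")  # -23.0
```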

Impact (outcome) Evaluation cont’d

 Impact evaluations in crime prevention pose interesting problems:

 Crime prevention initiatives rarely rely on a single intervention or approach

 The target of the initiatives (the unit of analysis for the evaluation) is a neighborhood or other geographic area

 Neighborhoods cannot be isolated

 Many interventions are not uniformly applied across an area or adopted by all residents

 Competing issues of crime displacement and diffusion of benefits

Impact Evaluation

 Problems (cont’d):

Crime prevention initiatives rarely rely on a single intervention approach.

Programs use a “menu” of different activities at the same time

Watch scheme

Property identification

Neighborhood cleanup

Periodic meeting

Newsletter

Obstacles with Impact Evaluation

 Which of the program initiatives are most effective?

What is the unit of analysis?

 Is crime displaced (moved to another area) or is it prevented?
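One rough way to probe the displacement question is to track crime in a surrounding "buffer" area as well as in the target area. The sketch below is an illustration only: the buffer-area logic and all counts are assumptions added here, not a method prescribed in the chapter.

```python
# If crime falls in the target area but rises in the adjacent buffer area,
# displacement is suspected; if crime falls in both, the program may be
# preventing crime and diffusing benefits. All figures are hypothetical.

target_before, target_after = 300, 220   # area receiving the intervention
buffer_before, buffer_after = 150, 185   # adjacent area with no intervention

target_change = target_after - target_before   # -80 offenses
buffer_change = buffer_after - buffer_before   # +35 offenses

if target_change < 0 and buffer_change > 0:
    print("Pattern consistent with displacement: crime may have moved next door.")
elif target_change < 0 and buffer_change <= 0:
    print("Pattern consistent with prevention (and possible diffusion of benefits).")
else:
    print("No clear reduction in the target area.")
```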


Process Evaluation

Considers the implementation of a program or initiative, determining the procedures used to put a specific program into place.

More of a qualitative approach to evaluation.

 Offers a detailed descriptive account of the program and its implementation

 Information is pivotal in answering questions about the context of an intervention and what actually took place in the initiative

 Unfortunately, many evaluations look only at the process

Process Evaluation

 Mission/goals of the program

 Level and quality of the program staff

 Funding and other resources of the program

 Obstacles faced in implementing and sustaining the initiative

 Degree to which the project was carried out as planned

 Level of support for the program

 Degree to which the clients complied with the operation

Problems with Process Evaluation

 The impact of the program is often difficult to determine from process information alone

 Success is often determined by “false” results:

 Number of meetings

 Number of members involved

 Length of program

 Financial support

 These are not true measures of success

Cost-Benefit Evaluations

 Seeks to assess whether the costs of an intervention are justified by the benefits or outcomes that accrue from it.

 A cost-benefit analysis should involve an impact and process evaluation to determine the program’s worth.

 You cannot determine if costs are justified if you do not measure whether or not the program is able to bring about the expected change.
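As a minimal illustration of how the pieces fit together, the sketch below assumes the impact evaluation has already produced an estimate of burglaries prevented; the program cost and the dollar value placed on each burglary are hypothetical numbers, not values from the text.

```python
# A minimal cost-benefit sketch. The benefit estimate depends entirely on
# the impact evaluation that produced "burglaries_prevented".

program_cost = 150_000.0      # staffing, materials, administration (hypothetical)
burglaries_prevented = 60     # estimate taken from the impact evaluation
cost_per_burglary = 3_500.0   # victim losses, police and court time, etc.

benefits = burglaries_prevented * cost_per_burglary     # $210,000
benefit_cost_ratio = benefits / program_cost            # 1.40
net_benefit = benefits - program_cost                   # $60,000

print(f"Benefits:           ${benefits:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"Net benefit:        ${net_benefit:,.0f}")
# A ratio above 1.0 suggests benefits outweigh costs -- but only if the
# underlying impact estimate is credible.
```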

Problems with Cost-Benefit Evaluations

 The largest problem involves setting monetary values on factors that are not easily quantified (fear, trauma, emotional loss, etc.)

 Another problem is making certain that all of the costs involved in the program (and related to the program operations) are counted

Theories and Measurement

 Programs are often created in a theoretical vacuum

Those implementing and evaluating the intervention do not pay attention to the theoretical assumptions underlying the prevention program.

 Evaluations undertaken in a theoretical vacuum may still provide answers regarding whether the program had the intended impact, just not why

 Basic questions on the assumed causal mechanism are often ignored

 Residents and planners want results but often fail to ask the questions of why or how:


Why would an educational program reduce aggressive behavior?

 This results in “basic” evaluations that report only whether the program was or was not a success

 These evaluations fail to tell us “why”

Theories and Measurement

 Programs that are not evaluated:

 Fail to tell us why the program is or is not successful

 Can provide only limited insight into whether the program can be implemented in other places or at other times

 Many investigations might not be necessary if the underlying theory for the intervention were examined

 For example: implementing curfew laws when most crimes involving youths occur during the afternoon

Why the resistance?

1. “Outcome Myopia”

 Programs and their evaluators are interested only in whether the program works, not how or why it works.

 Does not take into account the possibility that other factors are at work

 Does not tell why a program does not work

2. Many program administrators assume that a program works

 They believe it’s only “common sense” that it works

 Blind belief in programs

 They have the ear of politicians who can provide legislative support and funding

3. Many programs are the result of grassroots efforts

 Not interested in evaluations as long as those involved are happy with the program

Theories and Evaluation

 Evaluation of crime prevention strategies is key

Most programs fail to have adequate evaluation strategies

Examples

Juvenile curfew laws

Weed and Seed programs

Measurement Problems

 Measuring key outcome variables when the intervention is geographically based

Is the area being measured the same as the area in which the police intervention was implemented?

Many crime prevention programs are based on neighborhoods or other small geographic areas that do not coincide with specific police reporting areas

 There is a conundrum in crime prevention whereby programs often try to reduce the level of crime while simultaneously increasing the reporting of crime to the police

 Possible solutions

Using victim survey data

Theory and Measurement

 Measurement Issues cont’d

Victim data is not always available, and the collection of that data can be both time-consuming and costly

Operationalizing key variables such as fear is not a straightforward enterprise


Finding ways to uncover the competing influences in the project that mask the outcomes is difficult

Theory and Measurement

Follow-Up Periods

How long after the implementation of the program or intervention will changes in crime (or other outcome) appear?

Is there a possibility that over time any initial changes will diminish or disappear?

There is no rule on the appropriate follow-up time

Evaluation should look to the underlying theory for guidance

The ideal is a design in which follow-up data are gathered at several intervals, as in the sketch below
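A small sketch of why repeated follow-ups matter; the monthly burglary counts and the follow-up intervals below are hypothetical.

```python
# A single follow-up at 6 months would show a large drop; measuring again
# at 12 and 24 months reveals that the effect decays over time.

baseline = 40  # average monthly burglaries before the program (hypothetical)

follow_ups = {
    "6 months": 25,
    "12 months": 30,
    "24 months": 38,
}

for period, count in follow_ups.items():
    reduction = (baseline - count) / baseline * 100
    print(f"{period:>10}: {count} burglaries/month ({reduction:.0f}% below baseline)")
# Prints a 38% drop at 6 months that shrinks to 5% by 24 months.
```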

Method for Evaluation

Experimental Design

The preferred approach by many evaluators due to a number of strengths:

 A true experimental design, also known as a randomized controlled trial, is the gold standard in evaluation

 Relies on the random assignment of cases to experimental and control groups, which increases the likelihood that the two groups being compared are equivalent.

 There is enough control over the evaluation to make certain that the experimental group receives the treatment or intervention, while the control group does not.
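The random assignment step itself is easy to illustrate. The sketch below randomly assigns hypothetical city blocks to experimental and control groups; the unit of analysis, the group sizes, and the block labels are assumptions for illustration, not details from the chapter.

```python
import random

# Fixed seed so the assignment can be reproduced and audited.
random.seed(42)

# Twenty hypothetical city blocks eligible for the intervention.
blocks = [f"block_{i:02d}" for i in range(1, 21)]
random.shuffle(blocks)

midpoint = len(blocks) // 2
experimental_group = blocks[:midpoint]  # receives the intervention (e.g., extra patrols)
control_group = blocks[midpoint:]       # receives no intervention

print("Experimental:", sorted(experimental_group))
print("Control:     ", sorted(control_group))
# Because assignment is random, pre-existing differences between blocks
# should balance out on average across the two groups.
```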

Experimental Design

 A strength of Experimental Design is controlling for internal validity

 Internal validity deals with factors other than the intervention that could have caused the observed results

 Having a control group limits this threat to the program

Experimental Design cont’d

 The Maryland Scale of Scientific Methods (see Table 3.2) outlines the measures for evaluation

Developed for a review of the literature on what works in prevention for the U.S. Congress

Rates how closely a study adhered to the standards of a true experimental design

Suggests that policy makers should only consider research that meets the gold standard and that research funds should only be expended when an experimental design is used

 Not many programs meet the experimental design goal

Other Threats to Evaluation

 Generalizability

 Are the findings applicable in other places, settings, and times?

 External Validity

 Wide range of potential problems inherent in trying to replicate the findings of any program evaluation

 Difficult, if not impossible, to randomly assign communities to experimental and control groups

 Matching cannot guarantee that the areas are comparable

 No way to isolate the experimental and control communities from all other influences


 Experimental design evaluations often guide the project instead of the project being guided by the needs of the city

 Experimental Design (cont.)

Problems (cont.)

The underlying problem is that experimental designs fail to consider the context within which a program or intervention operates

Too easy to jump to a conclusion that something does or does not work

Negative findings may be the result of factors such as poor program implementation, misspecification of the appropriate target or causal mechanism underlying the problem, or resistance by the target

Realistic Evaluation

 Rather than relying exclusively on experimental approaches, evaluation needs to observe the phenomenon in its entirety.

 Two key ideas: mechanism and context

 Mechanism: understanding “what it is about a program which makes it work”

 By what process does an intervention impact an outcome measure such as crime or fear of crime?

 Context: “the relationship between causal mechanisms and their effects is not fixed, but contingent”

Summary

Evaluation or analysis of crime prevention programs is often seen as meaningless or too costly, and so it is often not completed

A good evaluation is important for the growth and success of the program

The remaining chapters will focus on different programs/designs and evaluation of program success will be discussed for each
