
CONTENT ANALYSIS OF TROPICAL
CONSERVATION EDUCATION PROGRAMS:
ELEMENTS OF SUCCESS
ABSTRACT: Evidence of success is needed to justify the use of
educational approaches as a tool to achieve conservation goals. A content
analysis of 56 reports on tropical conservation education programs
published between 1975 and 1990 revealed that fewer than half of the
programs were successful in achieving their objectives. The use of either
formative or long-term evaluations in the program design was correlated
with significantly higher rates of program success. Program longevity was
also associated with program success, suggesting a need for long-term
data collection in assessing the value of conservation education programs.
Other program attributes, such as location, sponsorship, and form of
publication used for information dissemination, were not correlated with
success.
International interest in sustaining natural tropical ecosystems has
increased over past decades as the global consequences of tropical forest
loss have become more apparent (Pearl, 1989; Wilson, 1989).
Nonsustainable human land-use patterns have been cited as the primary
cause of tropical habitat degradation (IUCN/UNEP/WWF, 1980, 1991).
Industrialized nations have targeted large capital investments to equatorial
regions in an effort to improve human interactions with the tropical
environment (Feder & Noronha, 1987). The 1992 Earth Summit held in Rio
de Janeiro further encouraged developed nations to direct large sums of
money to tropical resource conservation projects (United Nations
Conference on Environment and Development, 1993). How and where
funds earmarked for tropical conservation projects are distributed depends
on how funding agency decision makers and the public perceive a
program's anticipated success in meeting conservation goals.
Many approaches to altering land-use patterns exist, and debate continues
over which methods yield the best results, offer long-lasting solutions, and
cost less for investors. Conservation education has been recommended by
many education researchers, instructors, and administrators as a labor-intensive but cost-effective means of effecting behavioral change (e.g.,
Dietz & Nagata, 1986; Jacobson, 1987a, 1991). Supporters of education
as a critical conservation tool have asserted that financial or material
incentives in lieu of education measures typically require an expensive
initial investment and provide only short-term solutions unless subsequent
costly incentive packages are provided (Western et al., 1989). McNeely
(1988) suggested that, in the short term, economic incentives outperform
education efforts in changing human land-use patterns. This implies that
education programs require time investments of several years to achieve
conservation goals (McNeely, 1988; McNeely & Miller, 1992). If true, such
assertions would limit the appeal of educational approaches for
conservation-oriented funding organizations, which must often
demonstrate program success at relatively short, fiscal-year intervals to
justify continued funding.
Proponents of tropical conservation education programs lack
documentation supporting the benefits of education programs (Byers,
1995). Tropical programs tend to occur in remote locations, inhibiting
outside access to direct information on program needs and achievements.
Secondary information sources, such as program reports, seldom appear in established education or conservation publications, further
limiting program information available to potential program sponsors
(Wood, Wood, & McCrea, 1985). The literature often lacks information
essential to making informed decisions on effective funding allocation and
appropriate program improvements. Program evaluations are rare or
unreported (Jacobson, 1990). Yet program evaluation is necessary to
make informed and objective decisions about education program needs
and progress. Evaluation data can be used to determine how to improve
existing programs, to develop appropriate new strategies, and to allocate limited capital resources effectively. It is therefore important to determine
the contribution of evaluation approaches as well as other program
elements to the success of programs.
To better understand the effectiveness of tropical conservation education
programs as a conservation tool, we performed a content analysis on
program reports published in mainstream literature between 1975 and
1990. Content analysis is a method for testing hypotheses about the
contents of reports or other literature through categorization and
quantitative analysis of statements obtained directly from the literature
(Burns-Bammel, Bammel, & Kopitsky, 1988). Our analysis had two objectives: (a) to estimate the rates of program success reported and (b) to investigate potential correlates of reported program success, including geographic location, sponsors, duration, type of publication used for dissemination, and evaluation method (formative, summative, or long-term).
Method
Content Analysis
Content analysis is a technique for data collection, description, and
interpretation and is accomplished through the use of objective language,
categorization, and systematic surveys (Burns-Bammel et al., 1988).
Content analysis offers several advantages as a research tool over more
direct methods of data collection for an analysis of field-based education
programs. First, content analysis is time- and cost-effective: first-hand field data would have to be collected from many different locations to obtain a sufficient sample size for analysis. Such fieldwork would require expensive
and time-consuming travel, as well as substantial periods of time spent at
each site to collect program data before, during, and after program
initiation. In contrast, a content analysis can provide the same quantity of
data in several weeks or months of library, telephone, and mail surveys. In
addition, data collector bias is reduced because the collected data are
reported (a) after program completion, (b) in the presence of an external
program evaluator, and (c) without prior knowledge of the report's use in
an analysis. Content analysis is well suited to analyses of trends, because
the number of catalogued items can span a large temporal or spatial field:
In this study we analyzed program results published during more than a
decade and occurring in tropical areas throughout the world.
Data Collection and Analysis
Reports selected for the study included all available articles describing
conservation education programs in tropical countries published between
1975 and 1990. References used in the study were located through the
Educational Resources Information Center (ERIC) database,
which includes proceedings from workshops and symposia, journal
articles, book sections, clearinghouse reports, and foreign documents. Key
words and phrases related to conservation education and tropical regions
of the world were used to locate items in the database.
Data collected from each report were recorded and categorized on the basis of predetermined attribute descriptions to address the two objectives: estimation of program success rates and identification of program attributes correlated with program success. Table 1 describes the attributes considered in the analysis and the categories used for each attribute.
Objective 1: Estimation of Program Success
Two criteria were used to define program success: (a) reported
achievement of a program's goals and objectives and (b) additional
positive results reported to have occurred. All reported objectives were
compared with all listed outcomes for each program. Programs in which at
least half (50%) of the stated objectives were met were described as
having achieved their objectives. This liberal standard was used to avoid
underestimating program success. Reports were then searched for other
indications of positive outcomes regardless of whether they were stated as
objectives. Programs were considered successful if they met either of
these two criteria.
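For concreteness, this decision rule can be sketched in code. The following Python function is a minimal illustration of the classification logic described above; the function and field names are hypothetical labels chosen for this sketch, not part of the original coding instrument.

```python
# Minimal sketch of the success-classification rule described above.
# The names (objectives_stated, objectives_met, other_positive_outcomes)
# are hypothetical labels for illustration only.

def program_successful(objectives_stated: int,
                       objectives_met: int,
                       other_positive_outcomes: bool) -> bool:
    """Classify a coded report as successful if at least half of its stated
    objectives were reported as met, or if any additional positive outcome
    was reported."""
    achieved = objectives_stated > 0 and objectives_met / objectives_stated >= 0.5
    return achieved or other_positive_outcomes

# Example: a program meeting 2 of 5 objectives but reporting one
# unanticipated positive outcome would be classified as successful.
print(program_successful(5, 2, True))   # True
print(program_successful(5, 2, False))  # False
```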
Objective 2: Factors Correlated With Program Success
Seven possible correlates of success were considered (Table 1): geographic region in which the program took place, program sponsor, duration of the program at the time of the report, type of publication in which results were reported, and evaluation method (formative, summative, and long-term).
Program locations were categorized by continent. Programs taking place
on island nations were placed in the category of the nearest neighbor
continent. A regional category world was created to represent programs
designed to address a global audience. Sponsorship was divided into five
categories. If the sponsor was a single agency, it was categorized as an
international, national, or private group. Multiple-agency sponsorship
required the involvement of several agencies at different levels, such as a
coordinated effort by a national group and an international nonprofit
organization. Articles reviewed were categorized by the publication type in
which they were located: book, proceedings from a conference or
workshop, or journal. Program duration was identified as shorter than 3
years, 3 years or longer, or unreported if no information was given.
Three types of program evaluation were identified for reports. Evaluation
occurring while a program was underway was defined as formative
evaluation (Passineau, 1975). Immediate postprogram evaluation was
termed summative evaluation. Evaluation reported to have occurred at
least 6 months after program completion was defined as follow-up or long-term evaluation. Analyses of potential correlates of program success were
conducted using chi-square statistics (Zar, 1984).
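As a rough modern illustration of this kind of analysis, the sketch below uses scipy to run a chi-square test of independence on a 2 x 2 contingency table relating one attribute category to one success index. The counts are hypothetical placeholders, and scipy is simply one convenient tool for the calculation; it is not the software used in the original study, which cites Zar (1984) for the statistical methods.

```python
# Hedged sketch: chi-square test of independence for one program attribute
# (rows) versus one success index (columns). Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [12,  8],   # hypothetical: attribute present; objectives achieved / not achieved
    [10, 26],   # hypothetical: attribute absent;  objectives achieved / not achieved
])

# For 2 x 2 tables, scipy applies Yates' continuity correction by default.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square({dof}, N = {observed.sum()}) = {chi2:.2f}, p = {p:.3f}")
```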
Results
The search generated 56 published conservation education reports
suitable for content analysis (Table 2). Half of all program reports were
located in books, and the other half were divided equally between
published proceedings and journals. The largest percentage of programs
reported, 38%, occurred in Africa; 23% occurred in Asia, 21% in Latin America, and 13% in Australia; 5% addressed a global
audience.
Sponsorship of education programs was balanced between international
(25%), national (29%), and multiple-agency (23%) projects. Nine
programs, or 16%, were privately sponsored, and four reports did not
mention a sponsor. Sponsorship of programs was unequally distributed
among regions. For example, more than half of all Asian programs (53%)
listed single-agency, national-level sponsors, whereas more Latin
American programs (42%) reported multiple-agency sponsors. No Latin
American case studies listed a private sponsor, whereas 43% of Australian programs listed private sponsors.
About half of all reports (54%) indicated program longevity. Two thirds
(66%) of the programs reporting program duration had been in progress for
more than 3 years. Programs in Latin America reported the highest rate of
programs lasting 3 years or more (88%). All multiple-agency-sponsored programs that reported duration had been in existence for at least 3 years, whereas
other sponsoring groups reported a greater proportion (33% or more) of
short-term programs.
Success Rates Reported
Fewer than half of all programs (45%) reported achievement of at least half
of their specified objectives; 68% of these also cited unanticipated positive
outcomes resulting from the program. Six additional programs also
recorded unanticipated positive results. Programs having achieved
objectives also reported significantly higher rates of additional positive
outcomes than programs in which objectives were not achieved, χ²(1, N = 56) = 11.60, p < .01.
Attributes Correlating With Program Success
No significant differences were found when we compared program
location, publication type, program sponsor, and program longevity with
indexes of success (Table 3). However, programs with a duration of 3 years or more had twice the rate of success of briefer programs, suggesting a trend, χ²(1, N = 56) = 5.39, p < .068.
The only parameters showing clear relationships with success rates were
the evaluation techniques used during the program. Some form of
evaluation was used in fewer than half of all reports (45.6%). Evidence of
formative evaluation was most common (32.1% of all reports). Formative
evaluation procedures included periodic or spot checks to determine
progress toward specified objectives through interviews with educators,
program staff, and members of the target audience (e.g., Dyasi, 1982;
Jacobson, 1987b; Mosha & Thorsell, 1984).
Summative evaluation was less commonly noted (23.1%). Summative
evaluations were based on written or oral postcourse tests to determine
changes in knowledge or attitudes. For behavioral objectives, summative
evaluation was performed by tabulating the anticipated results of actions.
For example, in one case a primary objective was to have people plant a
given number of seedlings. Part of the summative evaluation process was
to count the number of seedlings planted in comparison to stated
objectives (Earnshaw & Skilbeck, 1982).
Long-term evaluation occurred in one quarter of all program reports. An
example of long-term evaluation was return visits to the site of an
education program 6 months after its conclusion to measure longer term
behaviors resulting from the project. This method was applied to a course
targeting a change in land-use practices on rapidly eroding soils in the
Philippines (Fujisaka, 1989). Farmers were observed months later to
determine if methods taught were still being implemented.
Articles reporting the use of formative evaluation were significantly more
likely to report that the program had achieved objectives and positive
outcomes, χ²(1, N = 56) = 6.60, p < .01, and χ²(1, N = 56) = 8.82, p < .003, respectively. Likewise, reports of long-term evaluation were
associated with higher rates of program success, based on the two
indexes [objectives achieved, χ²(1, N = 56) = 15.05, p < .001; other positive outcomes, χ²(1, N = 56) = 9.98, p < .002]. Surprisingly, program
success was not correlated with summative evaluation procedures (Table
3). Articles documenting the use of any two forms of evaluation, however,
did report significant increases in program success.
Discussion
Evidence of success is needed to justify the use of education programs as
a tool for achieving conservation goals. Many tropical conservation
programs are funded by agencies and institutions located in temperate
industrial nations, far from targeted audiences. These institutions depend
on development "experts" for guidance and members of the general public
for donations, who in turn rely on published materials for information. For
programs occurring in areas distant from potential donors, published
reports of progress may be the primary source used to formulate opinions
regarding the status, progress, and success and failure rates of various
conservation strategies. In this content analysis we surveyed the types of
information contained in published reports of tropical conservation
education programs, the degree to which various types of success are
being reported, and possible correlates to success.
Our results show that rates of program success were low, even using
rather lenient criteria to describe program success. These low success
rates indicate that present programs are ineffective or weak mechanisms
of change for meeting conservation objectives. This finding is reflected in a
recent report to the Biodiversity Support Program (1994) funded by the
U.S. Agency for International Development. Byers (1995) concluded that
traditional environmental education has failed.
Most program reports contained no information regarding evaluation
techniques implemented before, during, or after the program. Yet many
forms of evaluation, both qualitative and quantitative, can help improve
education programs. In this study we considered three broad types of
evaluation with various advantages. Formative evaluation assists with
immediate modifications of program design and implementation but does
not describe a program's overall achievements. Summative evaluation,
which occurs after a program is completed, provides information about
program success or failure but offers little information about causes for
these results. Follow-up evaluation offers the opportunity to monitor the
long-term effects of program strategies. Although the use of summative
evaluation had no significant effect on program success, the use of either
formative or long-term evaluation increased program success dramatically.
It appears from this study that conducting formative evaluations while a
program is underway and follow-up evaluations to monitor program impact
maximizes a program's success. Yet studies indicate that long-term
evaluation is the least likely form of evaluation to be conducted, because of
the time and resources required to implement it (Jacobson, 1987a).
Many of the evaluation techniques used in cases we reviewed for this
study consisted of qualitative, periodic interviews with teachers, students,
or other project participants. The informal nature of evaluations conducted
in these successful programs implies that time-, labor-, and capital-intensive techniques are not essential to obtain useful information for
program improvements, although systematic program assessment may
well improve programs further.
Program length correlated positively with program success. It was not
clear, however, whether duration itself or the potential for multiple
assessment opportunities was the cause for this relationship. In fact, our
data support the latter because 75% (n = 21) of the programs in operation
for 3 or more years described the use of follow-up evaluation techniques.
Continued support of education programs despite initial low success rates
is defensible if time is a critical factor in obtaining long-lasting results
(Jacobson, 1995).
This analysis reveals that published reports describing tropical
conservation education programs between 1975 and 1990 show low
success rates at achieving objectives and positive outcomes and
correspondingly low rates of program evaluation. The lack of program
evaluation remains a problem. For example, of 14 ongoing conservation
education programs in Africa documented by the Biodiversity Support
Program in 1994, only 3 were implementing evaluation or monitoring
approaches.
The inclusion of formative and follow-up assessment procedures in
existing and future tropical conservation education programs, and a more
complete description of programs in published reports, should reveal the
advantages of conservation education strategies more clearly to the public
and to sponsors of conservation programs. Advocates of educational
approaches to conservation claim that these methods provide less costly
and more lasting results than incentive-based programs. Results of
program evaluations can provide the hard data to support such claims.
ACKNOWLEDGMENTS
We thank M. McDuff and J. Norris for insightful reviews of this manuscript.
This article is Florida Agricultural Experiment Station Journal Series No. R05270.
TABLE 1. Description of Attributes for Content Analysis

Program objectives achieved
  Yes: At least 50% of stated objectives were reported to have been successfully achieved.
  No: Less than 50% of stated objectives were reported to have been achieved.

Other positive outcomes occurred
  Yes: Positive products, not included as specific objectives, were reported to have occurred.
  No: No additional, unanticipated positive outcomes were reported to have occurred.

Publication type
  Book: Article obtained from an edited book that was not written solely as the result of a conference.
  Journal: Article obtained from a peer-reviewed journal (e.g., The Journal of Environmental Education).
  Proceedings: Article obtained from conference proceedings.

Sponsorship
  Multiagency: Sponsors include more than one agency type (e.g., NGO and government agency).
  International: Primary sponsor is an international organization (e.g., World Bank, UNEP).
  National: Primary sponsor is a national-level agency (e.g., Kenya Parks and Wildlife Department).
  Private: Primary sponsor is a private donor (e.g., private school or individual contributor).
  Not reported: No information about the sponsor is included.

Region
  Africa: Project occurs in tropical Africa, including Madagascar.
  Asia: Project occurs in tropical India or east of India.
  Australia: Project occurs in Australia or New Zealand.
  Latin America: Project occurs in tropical South or Central America or the Caribbean.
  World: Project oriented to a tropical global audience.

Program length
  At least 3 years: Program existed for at least 3 years at the time the article was written.
  Fewer than 3 years: Program existed for fewer than 3 years at the time the article was written.
  Not reported: No indication of the program length was mentioned in the article.

Summative evaluation
  Yes: Article included indications of summative evaluation procedures used in the program.
  No: Article did not include indications of summative evaluation procedures used in the program.

Formative evaluation
  Yes: Article included indications of formative evaluation procedures used in the program.
  No: Article did not include indications of formative evaluation procedures used in the program.

Long-term evaluation
  Yes: Article included indications of follow-up evaluation procedures used in the program.
  No: Article did not include indications of follow-up evaluation procedures used in the program.

2 or more evaluation types
  Yes: Article included indications of two or more evaluation procedures used in the program.
  No: Article did not include indications of two or more evaluation procedures used in the program.
TABLE 2. Actual Number of Articles Meeting Success Criteria for Each Attribute Category

Legend: A = Objectives achieved (n = 25); B = Objectives not achieved (n = 31);
C = Positive outcomes occurred (n = 23); D = No positive outcomes occurred (n = 33).

Attribute                        n     A     B     C     D
Publication type
  Book                          29    11    18    11    18
  Journal                       14     9     5     8     6
  Proceedings                   13     5     8     4     9
Region
  Africa                        21     8    13     7    14
  Asia                          13     5     8     7     6
  Australia                      7     4     3     1     6
  Latin America                 12     7     5     7     5
  World                          3     1     2     1     2
Sponsorship
  Multiagency                   13     8     5     8     5
  International                 14     6     8     7     7
  National                      16     9     7     7     9
  Private                        9     1     8     1     8
  Not reported                   4     1     3     0     4
Program length
  At least 3 years              21    15     6    14     7
  Fewer than 3 years             9     3     6     4     5
  Not reported                  26     7    19     5    21
Summative evaluation
  Yes                            7     5     2     4     3
  No                            49    21    28    19    30
Formative evaluation
  Yes                           18    13     5    13     5
  No                            38    12    26    10    28
Long-term evaluation
  Yes                           14    13     1    11     3
  No                            42    12    30    11    31
2 or more evaluation types
  Yes                           12    11     1     8     4
  No                            44    15    29    14    30
TABLE 3. Results of Chi-Square Analysis

                                         Objectives achieved     Other positive outcomes
Attribute                         df       χ²        p              χ²        p
Publication type                   2      2.48     <.29            1.54     <.464
Region                             4      1.35     <.85            3.32     <.505
Sponsorship                        4      3.84     <.42            3.69     <.449
Program length                     2      4.67     <.09            5.39     <.068
Summative evaluation               1      1.03     <.31            0.26     <.608
Formative evaluation               1      6.60     <.01            8.82     <.003
Long-term evaluation               1     15.05     <.001           9.98     <.002
Two or more evaluation types       1     10.3      <.002           4.71     <.023
REFERENCES
Berroa, J. L. B., & Roth, R. E. (1990). A survey of natural resource and
national parks: Knowledge and attitudes of Dominican Republic citizens.
The Journal of Environmental Education, 21(3), 23-28.
Biodiversity Support Program. (1994). Summaries of USAID projects in
Africa with environmental education or communication components for the
analysis of behavioral motivations in integrated conservation and
development. Unpublished manuscript.
Burns-Bammel, L., Bammel, G., & Kopitsky, K. (1988). Content analysis: A
technique for measuring attitudes expressed in environmental education
literature. The Journal of Environmental Education, 19(4), 32-37.
Byers, B. (1995). Understanding and addressing the human dimensions of
conservation and natural resources management: A behavior-centered
participatory approach for practitioners (Report of the Biodiversity Support
Program). Washington, DC: Biodiversity Support Program.
Dietz, L. A., & Nagata, E. (1995). Golden-lion tamarin conservation
program: A community educational effort for forest conservation in Rio de
Janeiro State, Brazil. In S. K. Jacobson (Ed.), Conserving wildlife:
International education and communication approaches (pp. 64-86). New
York: Columbia University Press.
Dyasi, H. (1982). Changing the fishing methods in a community: An
approach to environmental education in Ghana. In M. E. McCowan & W. B.
Stapp (Eds.), Environmental Education in Action V: International case
studies in environmental education. Columbus, OH: ERIC Clearinghouse
for Science, Math, and Environmental Education.
Earnshaw, J., & Skilbeck, P. (1982). School site development. In M. E.
McCowan & W. B. Stapp (Eds.), Environmental Education in Action V:
International case studies in environmental education. Columbus, OH:
ERIC Clearinghouse for Science, Math, and Environmental Education.
Feder, G., & Noronha, R. (1987). Land rights systems and agricultural
development in sub-Saharan Africa. World Bank Research Observer, 2(2),
143-170.
Fujisaka, A. (1989). The need to build upon farmer practice and
knowledge: Reminders from selected upland conservation projects and
policies. Agroforestry Systems, 9, 141-153.
IUCN/UNEP/WWF. (1980). World conservation strategy. Living resource
conservation for sustainable development. Gland, Switzerland: Author.
IUCN/UNEP/WWF. (1991). Caring for the earth: A strategy for sustainable
living. Gland, Switzerland: Author.
Jacobson, S. K. (1987a). Conservation education programs: Evaluate and
make them better. Environmental Conservation, 14(3), 201-206.
Jacobson, S. K. (1987b). The development of a school program for
Kinabalu Park, Sabah, Malaysia. Sabah Society Journal, 7(2), 313-323.
Jacobson, S. K. (1990). A model for using a developing country's park
system for conservation education. The Journal of Environmental
Education, 22(1), 19-25.
Jacobson, S. K. (1991). Evaluation model for developing, implementing,
and assessing conservation education programs: Examples from Costa
Rica and Belize. Environmental Management, 15(2), 143-150.
Jacobson, S. K. (Ed.). (1995). Conserving wildlife: International education
and communication approaches. New York: Columbia University Press.
McNeely, J. A. (1988). Economics and biological diversity: Developing and
using economic incentives to conserve biological resources. Gland,
Switzerland: IUCN.
McNeely, J. A., & Miller, K. R. (1992). National parks, conservation and development: The role of protected areas in sustaining society. In Proceedings of the World Congress on National Parks. Washington, DC: Smithsonian Institution Press.
Mosha, G. T., & Thorsell, J. W. (1984). Synergism between biosphere
reserves and training institutions: A case study from eastern Africa. In
Conservation, science and society. Geneva, Switzerland: UNESCO-UNEP.
Passineau, J. P. (1975). Walking the "tightrope" of environmental
education and evaluation. In N. MacInnes & D. Albrecht (Eds.), What
makes education environmental. New York: Environmental Educators and
Data Couriers.
Pearl, M. C. (1989). How the developed world can promote conservation in
emerging nations. In D. Western & M. C. Pearl (Eds.), Conservation for the
twenty-first century (pp. 274-283). New York: Oxford University Press.
United Nations Conference on Environment and Development. (1993). The
Earth Summit: The United Nations Conference on Environment and
Development. Boston: Graham & Trotman/Martinus Nijhoff Press.
Western, D., Pearl, M. C., Pimm, S. L., Walker, B., Atkinson, I., &
Woodruff, D. S. (1989). An agenda for conservation action. In D. Western
& M. C. Pearl (Eds.), Conservation for the twenty-first century (pp. 304-323). New York: Oxford University Press.
Wilson, E. O. (1989). Conservation: The next hundred years. In D.
Western & M. C. Pearl (Eds.), Conservation for the twenty-first century (pp.
3-7). New York: Oxford University Press.
Wood, D. W., Wood, D., & McCrea, E. (1985). International environmental
education: The myth of transferability. In J. F. Disinger & J. Odie (Eds.),
Environmental education: Progress toward a sustainable future (pp. 359-365). Troy, NY: NAAEE.
Zar, J. H. (1984). Biostatistical analysis (2nd ed.). Englewood Cliffs, NJ:
Prentice-Hall.
By KIMBERLY S. NORRIS and SUSAN K. JACOBSON
Kimberly S. Norris is with the Department of Wildlife and Fisheries
Sciences at Texas A&M University, College Station, and she is also the
academic supervisor for Seaborne Conservation Corps, Texas A&M
University at Galveston. Susan K. Jacobson is an associate professor and
director of the Program for Studies in Tropical Conservation in the
Department of Wildlife Ecology and Conservation, University of Florida,
Gainesville.
Source: Journal of Environmental Education, Fall98, Vol. 30 Issue 1, p38,
7p