INFORMING OR SHAPING PUBLIC OPINION?: THE INFLUENCE OF SCHOOL
ACCOUNTABILITY DATA FORMAT ON PUBLIC PERCEPTIONS OF SCHOOL
QUALITY
Paper Prepared for The Association for Education Finance and Policy – 37th Annual
Conference: Concurrent Session VIII - Saturday, March 17, 11:30 AM – 1:00 PM
8.09 - Impacts of Accountability and Choice
DRAFT: FOR COMMENT ONLY. PLEASE DO NOT CIRCULATE OR CITE WITHOUT
THE AUTHORS’ PERMISSION
Rebecca Jacobsen (Corresponding Author)
Assistant Professor
Michigan State University
116G Erickson Hall
College of Education
Michigan State University
East Lansing, MI 48824
(ph) 517-353-1993; (fax) 517-432-2795
e-mail: rjacobs@msu.edu
Andrew Saultz
Doctoral Student
Educational Policy Program
Michigan State University
e-mail: saultzan@msu.edu
Jeffrey W. Snyder
Doctoral Student
Educational Policy Program
Michigan State University
e-mail: snyde117@msu.edu
Acknowledgements
The authors would like to thank the Time-Sharing Experiments for the Social Sciences (TESS) and
the principal investigators of TESS, Jeremy Freese, Northwestern University, and Penny Visser,
University of Chicago, for their feedback on the survey design and for supporting the data collection
for this project. The work for this paper was also supported in part by the Education Policy Center,
Michigan State University. Any opinions, findings, conclusions or recommendations expressed in
this publication are those of the authors alone.
Abstract (Word Count=143)
The 2001 No Child Left Behind Act requires local education agencies to publicly disseminate data on
school performance. In response, districts and state departments of education have created unique
"school report cards" that vary widely. Policy discussions often assume that data are neutral,
ignoring the possibility that people perceive differences between data formats. Using a population-
based survey experiment, this research investigates the link between school accountability data
format and public satisfaction with school performance. Mimicking the variation seen in publicly
available data formats, respondents were randomly assigned to one of four format conditions to
examine whether and how format influences public perception. Our findings suggest that data
format does significantly influence perceptions of school performance. Because our findings refute
the notion of data neutrality, we conclude by considering the policy feedback effects that data format
decisions may be having on education politics.
Introduction
The expansion of accountability policies in education has led to a dramatic proliferation of
school performance data. Under the 2001 No Child Left Behind (NCLB) Act, states and local
education agencies are required to publicly disseminate these data. Because our education system is a
public institution, the public has a right to know how well the system is performing. After all, the public
spends over $500 billion annually (U.S. Department of Education, 2010), so it is not surprising that
the people would want to be informed with data regarding how effectively this public money has
been spent. To provide the public with data as mandated by NCLB, state departments of education
and local districts have created school report cards. However, NCLB did not specify the format of
these data, allowing each state to develop its own unique version of how the data should be
presented in these school report cards. For example, Michigan provides an overall grade - A through
F - for each school (Michigan Dept. of Education, 2012), while Georgia reports a performance
index score for each school (Georgia Dept. of Education, 2012). While both systems ostensibly
report similar information – the relative achievement of students within the school - it is not clear
whether presenting data in different formats differentially influences public perceptions of school
quality. At a time when many educational policymakers are stressing the need to make more data
available for the public (e.g. Duncan, 2010) and simultaneously turning to the public for increased
support to implement new reforms, understanding whether the format of data significantly
influences perceptions of school quality is key to understanding the likelihood of support for new
policies. In this paper we examine the following question: How does the format of educational data
alter public perception of school performance?
Background
Underlying Theory of Action for Publicly Available Data
Because public education is a democratic institution, average citizens make important decisions about
whether, and to what degree, to support the public education system (McDonnell, 2005; McDonnell, 2008).
Public programs must communicate effectively with the people to ensure ongoing support and to
strengthen community ties (Gormley & Weimer, 1999; Mintrom, 2001). In short, the people want to
know how their institutions are performing to decide whether they are sufficiently satisfied with the
performance to continue supporting the institution (Lyons & Lowery, 1986; Glaser & Hildreth,
1999; Simonsen & Robbins, 2003).
However, the public often lacks the necessary information to hold their schools accountable.
The public is notoriously unaware of many policy issues (Delli Carpini & Keeter, 1996), but this lack
of knowledge in education may be due to the fact that many citizens have no direct interaction with
schools. Even parents may have little more than informal interactions with a small handful of
teachers upon which to judge the quality of the school or even the whole system. This asymmetry of
information hinders the ability of the people to apply pressure and voice demands for change
(McDonnell, 2004), ultimately leading to potentially worse educational outcomes as schools and their
faculty face little pressure from the people to improve (Peterson & West, 2003).
Believing that dissemination of accountability data can reduce this asymmetry of information
and empower the people to make more informed assessments of school performance (Feuer, 2008;
McDonnell, 2008), NCLB requires that performance data be made "widely available through
public means, such as posting on the Internet, distribution to the media, and distribution through
public agencies” (NCLB, 2002). NCLB and similar public policies are built upon an assumed
underlying theory of action that hopes the people will “act as a catalyst, actually triggering the causal
process” behind improved school performance (McDonnell, 2004, pg. 34).
To prompt people to act as this catalyst, the publication of school performance report cards
has become widespread. Performance report cards, which are not unique to education, are a "regular
effort by an organization to collect data on two or more other organizations, transform the data into
information relevant to assessing performance, and transmit the information to some audience
external to the organizations themselves" (Gormley & Weimer, 1999, pg. 3).
But advocates of publicly available data often ignore the subjective and interpretive nature
of numbers (Moynihan, 2008; Radin, 2006; Stone, 2002). Because such advocates assume that any
data will cause all people to act, little research has examined whether the type of data plays an
important role in shaping public opinion regarding school quality. As Stone (2002) notes, "In the
ideal market, information is perfect, meaning it is accurate, complete and available to everyone at no
cost. In the polis, by contrast, information is interpretive, incomplete and strategically withheld"
(pg. 28). As states and districts have implemented the NCLB requirement to publicly disseminate
school performance data, different policy decisions have resulted in a wide variety of report cards
with regard to both the amount and type of data provided to the public. Does this variation have a
significant influence on public perceptions?
Informing or Shaping Perceptions of School Quality: Potential Policy Feedback Effects
Understanding the way that public data shape perceptions is important because there is the
potential that these data are reshaping the political will of the electorate toward public education.
Education research has focused primarily on data use in schools. We have just over a decade of
research now on how districts, schools and their faculty use data to shape their actions (e.g. Blanc,
Christman, Liu, Mitchell, Travers & Bulkley, 2010; Booher-Jennings, 2005; Coburn, Honig & Stein,
2009; Diamond & Cooper, 2007; Heilig & Darling-Hammond, 2008; Thorn, Meyer, & Gamoran,
2007). But this narrow focus on the direct relationship between results-based reforms and
educational improvement has meant that questions regarding the influence of performance data on
broader opinions of and support for public education have largely been ignored.
Drawing upon policy feedback theory, which posits “that policies enacted and implemented
at one point in time shape subsequent political dynamics so that politics is both an input into the
policy process and an output" (McDonnell, 2009, p. 417), we suggest that such a narrow focus in the
research community may result in a blind spot for an equally important outcome of current policy
initiatives: school accountability data may be fundamentally reshaping how the citizenry views its
public schools. Publicly available performance data are intended to shape public perceptions, but we
know very little about how opinion is being shaped and whether it is being shaped in ways that
continue to engage the public in the education system. Policy choices regarding the exact type of
data may have differential effects, resulting in declines in support for public education simply
because some data formats are misinterpreted. We suggest that policy choices regarding the
formatting of school performance data likely structure public perceptions of school quality in
important ways, ways currently ignored both by policy makers, who are rushing to publicize
increasing amounts of education accountability data, and by researchers, who have focused
exclusively on the way school personnel use performance data. The potentially differential effects
that data may have on the public, however, may have implications for future education politics and
the possibilities for education reform (Mettler, 2002; Soss & Schram, 2007).
Data
Sample
The data used in this study are from a population-based survey experiment (Mutz, 2011)
fielded by Knowledge Networks, whose probability-based KnowledgePanel is the only nationally
representative online panel recruited via both random digit dialing and address-based sampling.
Population-based survey experiments enable researchers to test theories “on samples that are
representative of the populations to which they are said to apply” thereby providing stronger
external validity (Mutz, 2011, pg. 3). Given the public nature of education, both as a consumer of
public funds and an input into civic life, polling the larger public is critical to understanding how
opinions that shape future education politics are developed.
A total of 1,833 panelists were randomly drawn from the Knowledge Networks panel, and 1,111
responded to the invitation. This represents a final-stage completion rate of 60.6 percent. The
recruitment rate for this study, as reported by Knowledge Networks, was 15.2 percent, for a
cumulative response rate of 9.2 percent. The final sample is representative of the larger U.S.
population, and Table 1 provides demographic information for the sample as a whole relative to the
U.S. Current Population Survey from December 2011.
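(For clarity, the cumulative response rate is the product of the two stages: the 15.2 percent recruitment rate times the 60.6 percent completion rate, 0.152 × 0.606 ≈ 0.092, or 9.2 percent.)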
[Insert Table 1 About Here]
Survey Instrument
A review of 59 school report cards - one for each state, Washington, D.C., and the eight largest cities
- yielded four common data formats: 1) performance index ratings, 2) letter grades, 3) percent of
students meeting a goal, and 4) achievement levels. While all four are intended to convey the same
information – the relative performance of the school's student population – the presentation of the
data varies.
Condition 1: Performance index. Some states provide the public with a performance index
rating for each school. Two notable examples include California and Ohio. These scores are often
decontextualized and can take on a variety of ranges (e.g. California issues an API score somewhere
between 200 and 1000 whereas Ohio’s falls between zero and 120). We presented respondents in
this condition group with a performance index between zero and 200.
Condition 2: Letter grade. Many states provide the public with a school letter grade. Similar
to how students are graded, schools receive A-F letter grades for their performance. Florida and
Michigan are two places where the public receives information in this format.
Condition 3: Percent meeting goals. By far the most common data format is a reporting
of the percent of students meeting a specified goal. States may display the goal differently – North
Carolina uses percentage at or above grade level while Wisconsin uses the percentage scoring at each
level of its state test – but the overall format is similar. Respondents in this condition were shown a
percentage between zero and one hundred.
Condition 4: Achievement level. Several states label schools using achievement levels to
signal their performance. For example, Ohio labels each school with one of six designations ranging
from "Academic Emergency" to "Excellent with Distinction." We utilize the achievement levels
adopted by the National Assessment of Educational Progress (NAEP), which includes four
designations: below basic, basic, proficient, and advanced. We added a fifth category, failing,
because of the increasing use of this label for schools.
Equating across the conditions. To equate the formats across conditions, existing state
report cards were used as models. Several states combine two or more of the above formats, thus
making it possible to relate some of the formats directly. Because each state constructs its own
measures of success and these vary widely, we constructed an equating method based on what best
represents existing report card data.
[Insert Table 2 About Here]
Letter Grades to Achievement Levels. The relationship between letter grades and
achievement levels is straightforward. The five traditionally used letter grades – A, B, C, D and F –
map neatly onto five achievement level ratings (See columns 1 and 2 in Table 2).
Letter Grades/Achievement Level to Percent Meeting Goal. The way that states equate
either of the above formats to a percent of students meeting a given goal is not consistent. We relied
upon our assessment of what was most common across the states to equate these formats. We
recognize that the ranges listed in Table 2 are not uniform, but this reflects what is commonly used
across the states and ensures greater ecological validity in our study. Typically, the highest level of
achievement (often called "advanced") is given to a smaller segment of schools. Therefore, this
distribution is our best approximation of what is commonly reported to the public. Finally, the
midpoint of each range was selected to represent the exact data point to be included in the survey
(See column 3 of Table 2).
Percent Meeting Goal to Performance Index Ratings. The final column, the
Performance Index, is the format of data that varies most widely from state to state (See column 4
of Table 2). Because no two states are alike, selecting what is typical is impossible. Therefore, we
chose to construct an artificial 200-point scale based on the hypothesis that the public may convert
these numbers into a more familiar scale – like a percent out of 100. To examine this
possibility, we doubled the value we assigned in the percent meeting the goal format. While in reality
this would be an incorrect interpretation of these numbers, because no actual school report card uses
its index in such a manner, this design choice enabled us to examine this potential.
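To make the equating concrete, the sketch below (in Python; the names are ours and purely illustrative, not drawn from any actual state system) reproduces the Table 2 scheme, including the doubling used to construct the artificial index:

# Sketch of the Table 2 equating scheme (authors' construction, not an
# actual state report card system).
EQUATING = {
    # letter grade: (achievement level, midpoint of typical percent range)
    "A": ("Advanced", 95),
    "B": ("Proficient", 82),
    "C": ("Basic", 62),
    "D": ("Below Basic", 37),
    "F": ("Failing", 12),
}

def equate(letter):
    """Return the four equated formats for one letter grade."""
    level, pct = EQUATING[letter]
    return {"letter_grade": letter,
            "achievement_level": level,
            "percent_meeting_goal": pct,
            "performance_index": 2 * pct}  # 200-point index: double the percent

# equate("B") -> {'letter_grade': 'B', 'achievement_level': 'Proficient',
#                 'percent_meeting_goal': 82, 'performance_index': 164}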
Distribution of School Scores. To examine whether the variation in format influences
perception of school quality, respondents were randomly assigned to one of the four format
conditions and shown school performance data for three schools. Performance data were provided
for three areas in which schools are commonly expected to develop students' knowledge, skills and
behaviors: Academics, the Arts, and Citizenship (Rothstein & Jacobsen, 2009). The data assigned to
each school are distributed symmetrically, with "C" being the average score for Academics. School 1
- the "Strong Performance" school - was assigned an academic score one unit above the average, and
School 3 - the "Weak Performance" school - was assigned an academic score one unit below the
average (See Table 3).
[Insert Table 3 About Here]
Assessing Satisfaction with School Performance: Dependent Variable. After viewing a
school’s data, respondents were asked to evaluate school performance using a seven-point rating
scale. Utilizing a modified version of the American Customer Satisfaction Index (ACSI), which is
widely cited in the business and media literature (Fornell et al., 1996; see also
http://www.theacsi.org/), respondents expressed their satisfaction with 1) the overall performance, 2)
whether the school meets their expectations, and 3) how close the school is to their ideal school.
(See Appendix A for a sample of the survey instrument, including the exact question wording for
this section.) This trio of questions has been found to have strong internal and retest reliability and
the strongest construct validity when compared to five other single and multi-item satisfaction
measures (Van Ryzin, 2004a). ACSI measures have long been used in studying consumer behavior.
More recently, public administration scholars have used these questions to assess citizen satisfaction
with public services (Van Ryzin, 2004a; Van Ryzin, Muzzio, Immerwahr, Gulick, & Martinez, 2004).
For each school in each condition, internal reliability (Cronbach’s alpha) for the set of three
satisfaction items was 0.9 or higher. This high level of internal consistency allowed us to average the
three questions into a single outcome.
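As an illustration, a minimal sketch of this reliability check and averaging step (using the standard Cronbach's alpha formula; the ratings shown are toy values, not the study's data):

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var / total_var)

# Three 1-7 ACSI-style items: overall satisfaction, expectations, ideal (toy data).
ratings = np.array([[6, 5, 5], [7, 7, 6], [3, 4, 3], [2, 2, 1]])
if cronbach_alpha(ratings) >= 0.9:         # the threshold reported above
    satisfaction = ratings.mean(axis=1)    # average the three items into one outcome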
Empirical Strategy
Prior to beginning our analysis, we investigated whether sample demographics were spread
evenly across conditions. To do this, we ran one-way ANOVAs with each demographic variable as
the response and condition as the factor, for the following demographics: race, age,
education, income, gender, marital status, employment status, metropolitan statistical area (MSA)
status, geographical region, presence of a school-aged child, political party, and ideology. In no
instance did condition have a significant effect, suggesting that these demographics were evenly
distributed across conditions.
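A sketch of this balance check, using scipy's one-way ANOVA (the data frame and variable names are illustrative, not the study's actual file):

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Toy stand-in for the respondent file: one row per respondent, with the
# randomly assigned condition (1-4) and two demographic variables.
df = pd.DataFrame({"condition": rng.integers(1, 5, size=400),
                   "age": rng.integers(18, 90, size=400),
                   "income_bracket": rng.integers(1, 20, size=400)})

for var in ["age", "income_bracket"]:
    groups = [g[var].to_numpy() for _, g in df.groupby("condition")]
    f_stat, p_val = stats.f_oneway(*groups)  # demographic as response, condition as factor
    print(f"{var}: F = {f_stat:.2f}, p = {p_val:.3f}")  # balance implies p > .05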
Because our study benefitted from an experimental design, we used ANOVA estimates to
examine initially whether the condition assigned to respondents influenced satisfaction. The three
satisfaction scores for each school (strong, middle, weak) were first averaged into a single variable.
We then ran one-way ANOVA models for each school, with condition as the factor. Finally, we used
unpaired t-tests to explore the exact differences between specific pairs of conditions.
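The main analysis follows the same pattern; a minimal sketch under the same caveats (toy scores, not the study's data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Averaged three-item satisfaction scores for one school, by condition (toy data).
sat = {c: rng.uniform(1, 7, size=270) for c in ["PI", "LG", "PM", "AL"]}

# One-way ANOVA: does assigned condition influence satisfaction with this school?
f_stat, p_anova = stats.f_oneway(*sat.values())

# Pairwise unpaired t-tests to locate which conditions differ.
for a, b in [("PI", "LG"), ("PI", "PM"), ("PI", "AL"),
             ("LG", "PM"), ("LG", "AL"), ("PM", "AL")]:
    t_stat, p_t = stats.ttest_ind(sat[a], sat[b])
    print(f"{a} vs. {b}: t = {t_stat:.2f}, p = {p_t:.3f}")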
Results
Descriptive Results
As can be seen in Table 4, regardless of condition, respondents reported higher levels of
satisfaction for the strong school and lower levels of satisfaction for the weak. Thus, respondents
were able to differentiate between schools regardless of the specific condition to which they were
assigned. While the strong school consistently received higher satisfaction ratings, it received the
highest rating from those who viewed the letter grade data format (5.16) and the lowest rating from
those who viewed the performance index data format (4.32). Similarly, respondents seem to hold
different levels of satisfaction for the weak school based upon their assigned condition. Respondents
who viewed the percent proficient expressed the highest level of satisfaction with the weak school
(2.20) while respondents viewing the letter grade format expressed the lowest level of satisfaction
(1.85). While it appears that format influenced the perceptions of school quality for the strong
school and weak school, the average satisfaction for the average school is nearly uniform across
condition. There is only a small amount of variation between the conditions for the average performing school
(just 0.13 points). This suggests that the format of the data may play a significant role in shaping
perceptions of school quality when schools are either excelling or struggling (See table 4).
In addition to impacting the average satisfaction rating across conditions, it would appear
that the format of school performance data impacts the spread respondents perceived between the
schools. For example, when compared to the other conditions, respondents who viewed the letter
grades expressed both higher levels of satisfaction with the strong school and lower levels of
satisfaction with the weak school, resulting in a larger spread in satisfaction across the schools. This
suggests that those viewing letter grades, a familiar format, perceive greater variation in school
performance as shown by the spread in strong and weak school satisfaction. Conversely,
respondents viewing the performance index format perceived the schools as more similar in
performance, thus narrowing the range of satisfaction scores between the schools. Using the
means reported in Table 4, the spread for letter grades was 3.32, seemingly much larger than the
other conditions. The spreads for performance index, percent meeting goal, and achievement level
conditions were 2.18, 2.72, and 2.80, respectively. It appears that format influences both average
satisfaction levels and respondents' perceptions regarding the degree of variation between
schools.
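These spreads can be recovered directly from the unrounded means reported in Table 6:

# Strong-school minus weak-school mean satisfaction, by condition (Table 6 means).
means = {"LG": (5.16226, 1.84528), "PI": (4.31595, 2.13501),
         "PM": (4.92249, 2.20035), "AL": (4.88202, 2.08459)}
spreads = {fmt: round(hi - lo, 2) for fmt, (hi, lo) in means.items()}
# -> {'LG': 3.32, 'PI': 2.18, 'PM': 2.72, 'AL': 2.8}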
Statistical Results
Because the average satisfaction ratings by condition discussed above suggest that format
may be significantly influencing perceptions of school quality, we further examined these data using
statistical analysis. Table 5 reports results from ANOVA estimates. Using average satisfaction with
each school as the response and condition as the factor, the ANOVA results demonstrate that condition
significantly influences reported satisfaction levels for both the strong and weak schools. This
suggests that the data format selected by different states can have a significant impact on public
perception of school quality.
These findings gave us reason to explore individual conditions through unpaired t-tests to
learn more about where differences occur. Table 6 shows results from t-tests comparing
combinations of conditions within school groupings.
Strong School. Format had the most significant impact on the perceived quality of the
strong school. Those who viewed data in the letter grade format reported significantly higher
satisfaction than those in each of the other conditions. Additionally, those who viewed the
performance index rating had significantly lower levels of satisfaction with the strong school than
each of the other conditions. For the strong school, only the percent meeting the goal and the
achievement level formats resulted in no significant difference in perceived satisfaction with the
performance.
Middle School. Unlike the strong and weak schools, we were unable to find significant
differences between reported satisfaction levels and the assigned condition. ANOVA results
provided no significant evidence that differences in satisfaction occurred among conditions. T-test
results further confirmed this finding and provided no evidence that respondents were any more or
less satisfied with our average school as condition varied.
Weak School. Reported satisfaction levels were significantly different for the weak school
for some of the conditions. Similar to the strong school, respondents who viewed data in the letter
grade format expressed significantly lower satisfaction levels when compared to the three other
condition formats. All other conditions had no significant differences in average satisfaction.
Discussion
Producing school performance data is a popular reform strategy filled with promises
of higher levels of achievement. While a significant body of research has developed to document the
ways in which schools, districts and teachers are using these data to improve educational outcomes,
limited attention has been paid to the ways the public is being influenced by these data. A small, but
growing body of research demonstrates that the people are paying attention to these data (e.g. Figlio
& Lucas, 2004; Figlio & Kenny, 2009; Charbonneau & Van Ryzin, 2011). But thus far, data are often
assumed to be neutral (Moynihan, 2008; Radin, 2006; Stone, 2002). Yet our results demonstrate that
public perception of school performance, particularly for our strong and weak school, can be either
raised or lowered depending upon the format in which data are presented. In short, not all data
are created (and communicated) equally. Therefore, this research contributes to our small, but
growing understanding of the way that school performance data are impacting public perceptions of
school quality. Because these perceptions are critical to maintaining support for and involvement
with public education, understanding whether and how data shape perceptions is key to maintaining
healthy support for the education system.
We find that format can significantly impact average satisfaction levels with school
performance. Some formats result in higher reported satisfaction levels while some formats seem to
depress satisfaction levels. Thus, our survey results indicate that the format of data can be a driver of
average satisfaction. Moreover, we demonstrate that format can significantly influence the variation
that is perceived between schools. The letter grade format in particular caused people to perceive
greater differences between the schools, while those who viewed the performance index format
perceived that all three schools were performing more similarly.
Blunting or enhancing the ability to perceive differences between schools may be especially
important when respondents assess the overall health of the education system. If the public views
the majority of its schools as performing at roughly the same level, the demands for improvement
and reform may be very different than if the public perceives only some schools to be struggling
significantly. Further, the ability of the people to see pockets of excellence among their schools may
be critical to ensuring ongoing faith in the education system. Therefore, the perceived policy
problem and solution for these different scenarios, which have been shaped by the data, may be
significant for garnering public support for the reform agendas put forth by the education community.
Thus rather than being neutral, data and their format can be a powerful tool to shape public opinion.
While our findings question the neutrality with which data dissemination decisions occur,
like all studies, some limitations should be considered when drawing conclusions from our results.
First, some may question whether the public even pays attention to these data. Current policies and
the underlying theory “assumes that the availability and quality of performance data is not just a
necessary condition for use, but also a sufficient one” (Moynihan, 2008, pg. 5). It is not hard to
imagine that in the "age of information," school performance data are short-lived in the minds of
most citizens. Admittedly, we have mixed, but limited, research regarding public use of school
performance data (McDonnell, 2004; Pride, 2002; Hausman and Goldring, 2000; Holme, 2002;
Weelden and Weinstein, 2007). In some of the most recent work, however, evidence has been
growing that the public is using and responding to these data (Charbonneau and Van Ryzin, 2011;
Figlio and Loeb, 2011; Jacobsen and Saultz, in press). And in places such as New York City,
extensive media reporting when report cards are released suggests that the public is likely aware of these
data. Therefore, while additional work is needed in this area, we believe that the public is being
influenced by the growing availability of school performance data, making our experimental
survey applicable to real-world data use.
Additionally, an unavoidable problem emerges when trying to equate data across the
multiple formats. An agreed-upon equivalency does not exist in practice. Thus, we have relied upon our
assessment of existing practices and our best judgment to equate conditions. While we believe our
judgments were sound, there is a wide range of data available and in some states and districts, the
equivalency we presented respondents would not be representative. Thus, these findings may not
represent all current data systems in place. However, they do broadly represent the majority of
report cards currently being publicized.
Ideally, this study would encompass all data formats used by states to report on school
quality. However, limitations on sample size necessitated that we focus only on the most commonly
found data features. Graphical representations (bar graphs, pie charts and line graphs) were not
included, but several states and districts do provide such images to accompany their data. Moreover,
several states and districts provide the public with a school or district comparison in conjunction
with school performance data. Whether and how this additional information shapes interpretation
remains unknown. Future research should continue to investigate whether expanding the information
presented to the public supports or contradicts our findings.
Conclusion
Because public education is a democratically run institution, the public makes important
decisions about whether and to what degree to support its public education system. Significant
public investment in the education system makes public dissemination of data not only an intuitive
policy strategy, but also a key component for effective democratic control. Citizens need
information in order to accurately assess the quality of public services in which they are heavily
investing and hold their representatives responsible for the performance of these institutions.
Often, education leaders focus on the embarrassment power of publicly available data; as
Mintrom summarizes, “information concerning how well schools are performing relative to others
offers parents and other interested citizens vital knowledge. Armed with this knowledge, it is much
easier for citizens to ask pointed questions about school performance" (Mintrom, 2001). But in
addition to providing the people with data so that they can point out weaknesses in the system,
publicizing data can have a positive impact on perceptions as well. Accurate and timely information
can build trust and confidence amongst the people, who are more likely to support their institutions
if they are able to see what they are getting for their investment (Hatry, 2006). Consequently,
performance data can actually boost public confidence in and satisfaction with schools, possibly
reversing the downward trend in confidence in education over the past 40 years (Jacobsen, 2009; Loveless, 1997).
But such positive or negative effects are not simply an artifact of the data themselves. They
are also shaped by policy choices regarding the dissemination of data. Such policy feedbacks
structure the shape of public opinion in ways that can both foster and constrain future political
support for the education system. While creating and disseminating school performance data may be
necessary to enhance participation in education governance, policy makers should pay increased
attention not just to whether data are distributed but also to how data are presented. Decisions about how to
present data hold serious implications for how data are interpreted and can shape how policy
feedback loops change political environments.
Finally, we must pay careful attention to the way in which the variations in format are
purposely used to sway public opinion. Stone (2002) reminds us that "Measures in the polis are not only
strategically selected but strategically presented as well" (pg. 185), and as we have demonstrated here,
the potential exists to present schools in a particular light simply by altering the presentation of the
data. While most conversations regarding education performance data focus nearly exclusively on
the promises they hold for improvement, this work reminds us that performance data can also be
used to threaten the system and hinder improvement. It would be wise for us, then, to proceed
cautiously as we increasingly pursue policies that publicize school performance data.
Works Cited
Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010). Learning to
learn from data: Benchmarks and instructional communities. Peabody Journal of Education,
85(2), 205–225.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability
system. American Educational Research Journal, 42, 231–268.
Charbonneau, É., & Van Ryzin, G. G. (2011). Performance measures and parental satisfaction with
New York City schools. The American Review of Public Administration.
doi:10.1177/0275074010397333
Coburn, C. E., Honig, M. I., & Stein, M. K. (2009). What's the evidence on districts' use of
evidence? In J. Bransford, D. J. Stipek, N. J. Vye, L. Gomez, & D. Lam (Eds.) Educational
improvement: What makes it happen and why? (pp. 67–86). Cambridge, MA: Harvard Educational
Press.
Coburn, C.E. & Turner, E.O. (2011). Research on data use: A framework and analysis,
measurement. Interdisciplinary Research and Perspective, 9(4), 173-206.
Delli Carpini, M. X., & Keeter, S. (1996). What Americans Know About Politics and Why It Matters. (New
Haven, CT: Yale University Press).
Diamond, J. B., & Cooper, K. (2007). The uses of testing data in urban elementary schools: Some
lessons from Chicago. In P. A. Moss (Ed.), Evidence and decision making (pp. 241–263).
Malden, MA: National Society for the Study of Education.
Duncan, A. (August 25, 2010). Secretary Arne Duncan’s remarks at the Statehouse Convention
Center in Little Rock, Arkansas. Retrieved from U.S. Department of Education website
http://www.ed.gov/news/speeches/secretary-arne-duncans-remarks-statehouse-convention-center-little-rock-arkansas.
Feuer, M. (2008). “Future Directions for Educational Accountability: Notes for a Political Economy
of Measurement.” in The Future of Test-Based Educational Accountability, eds. Ryan, K. &
Shepard, L. (New York: Routledge), 293-307
Figlio, D. N., & Loeb, S. (2011). School accountability. In E. A. Hanushek, S. J. Machin & L.
Woessmann (Eds.), Handbooks in Economics: Economics of Education (Vol. 3, pp. 383-421).
North-Holland, The Netherlands: Elsevier.
Fornell, C., Johnson, M.D., Anderson, E.W., Cha, J., & Bryant, B.E. (1996). The American
Customer Satisfaction Index: Nature, purpose and findings. Journal of Marketing. 60, 7-18.
Georgia Department of Education (2012). http://www.doe.k12.ga.us/
Gormley, W.T., Jr. & Weimer, D.L (1999). Organizational Report Cards, (Cambridge, MA: Harvard
University Press).
Glaser, M.A., & Hildreth, B. (1999). “Service Delivery Satisfaction and Willingness to Pay Taxes:
Citizen Recognition of Local Government Performance,” Public Productivity & Management
Review 23(1): 48-67.
Hatry, H. (2006). Performance Measurement: Getting Results, 2nd Edition. Washington, D.C.: Urban
Institute Press.
Heilig, J. V., & Darling-Hammond, L. (2008). Accountability Texas-style: The progress and learning
of urban minority students in a high-stakes testing context. Educational Evaluation and Policy
Analysis, 30(2), 75–110.
Jacobsen, R. (2009). "The Voice of the People in Education Policy," in Handbook of Education Policy
Research, ed. Sykes, G., Schneider, B., and Plank, D. New York, NY: Routledge Press.
Loveless, T. (1997). “The Structure of Public Confidence in Education,” American Journal of Education
105(2): 127-159.
Lyons, W.E., & Lowery, D. (1986). "The Organization of Political Space and Citizen Responses to
Dissatisfaction in Urban Communities: An Integrative Model," The Journal of Politics 48(2):
321-346.
McDonnell, L. M. (2004). Politics, persuasion, and educational testing. Cambridge, MA: Harvard
University Press.
McDonnell, L.M. (2005). “Assessment and Accountability from the Policymaker’s Perspective,”
Yearbook of the National Society for the Study of Education 104(2): 35-54.
McDonnell, L.M. (2008). “The Politics of Educational Accountability: Can the Clock be Turned
Back?” in The Future of Test-Based Educational Accountability, Eds. Ryan, K. & Shepard, L. (New
York: Routledge), 47-67
McDonnell, L. M. (2009). Repositioning politics in education's circle of knowledge. Educational
Researcher, 38(6), 417-427.
Mettler, S. (2002). Bringing the state back in to civic engagement: Policy feedback effects of the G.I.
Bill for World War II veterans. The American Political Science Review, 96(2), 351-365.
Michigan Department of Education (2012).
https://oeaa.state.mi.us/ayp/school_one_only_1_2004.asp?ECDid=2020&Grade=12
Mintrom, M. (2001). Educational governance and democratic practice. Educational Policy, 15: 615-642.
Moynihan, D. (2008). The Dynamics of Performance Management: Constructing Information and Reform
(Washington, DC: Georgetown University Press).
Mutz, D. (2011). Population-based survey experiments. Princeton, NJ: Princeton University Press.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Peterson, P. and West, M. (2003). No Child Left Behind? The Politics and Practice of School Accountability
(Washington, D.C.: Brookings Institution Press).
Plank, D.N., & Boyd, W.L. (1994). Antipolitics, education, and institutional choice: The flight from
democracy. American Educational Research Journal, 31(2), 263-281.
Radin, B. (2006). Challenging the performance movement: Accountability complexity and democratic values.
Washington, D.C.: Georgetown University Press.
Rothstein, R. and Jacobsen, R. (2009, May). “From Accreditation to Accountability.” Phi Delta
Kappan.
Schattschneider, E. E. (1975). The Semisovereign People. United States of America: Wadsworth
Thomson Learning.
Simonsen, B., & Robbins, M.D. (2003). "Reasonableness, Satisfaction, and Willingness to Pay Property
Taxes," Urban Affairs Review 38(6): 831-854.
Soss, J. & Schram, S.F. (2007). A public transformed? Welfare reform as policy feedback. The
American Political Science Review, 101 (1), pp. 111-127.
Stone, D. (2002). Policy paradox: The art of political decision making. New York: W.W. Norton & Co.
Thorn, C., Meyer, R. H., & Gamoran, A. (2007). Evidence and decision making. In P. A. Moss (Ed.),
Evidence and decision making (pp. 340–361). Malden, MA: National Society for the Study of
Education.
U.S. Department of Education, National Center for Education Statistics. Digest of Education Statistics,
2010. (Washington, D.C.: NCES, 2011), Table 188.
Van Ryzin, G.G. (2004a). "The measurement of overall citizen satisfaction." Public Performance &
Management Review, 27(3), 9-28.
Van Ryzin, G. G. (2004b). Expectations, performance, and citizen satisfaction with urban services.
Journal of Policy Analysis and Management, 23(3): 433-448.
Van Ryzin, G.G., Muzzio, D., Immerwahr, S., Gulick, L., & Martinez, E. (2004). Drivers and
consequences of citizen satisfaction: An application of the American customer satisfaction
index model to New York City. Public Administration Review, 64(3), 331-341.
Wichowsky, A., & Moynihan, D. P. (2008). Measuring how administration shapes citizenship: A
policy feedback perspective on performance management. Public Administration Review, 68(5),
908-920.
Table 1. Demographics and CPS Estimates (all figures in percent)

                                  Adult U.S.   Total    Performance  Letter   Percentage    Achievement
                                  Population   Sample   Index        Grade    Meeting Goal  Level
                                  (Dec. 2011
                                  CPS)
Gender
  Male                            47.81        49.95    49.65        50.19    51.04         48.89
  Female                          52.19        50.05    50.35        49.81    48.96         51.11
Age
  18-24                           11.36        10.71    13.38         8.18    11.46          9.63
  25-34                           16.84        14.31    14.08        17.10    13.89         12.22
  35-44                           16.78        16.20    17.25        13.01    15.63         18.89
  45-54                           18.95        19.62    17.25        22.30    19.44         19.63
  55-64                           17.25        20.43    17.96        18.96    22.57         22.22
  65 or over                      18.83        18.73    20.07        20.44    13.57         17.41
Race
  White, Non-Hispanic             82.40        71.38    72.54        71.75    70.49         70.74
  Black, Non-Hispanic              9.93         9.63    10.21        11.29    12.33         10.21
  Hispanic                         8.55        13.75    10.07        14.24     9.63         11.11
  Other, Non-Hispanic              6.20         3.51     3.87        13.75     2.78          4.44
  2+ Races, Non-Hispanic           1.46         3.15     3.17         2.97     2.43          4.07
Educational Attainment
  Less than High School Diploma   11.92        10.62    13.03        10.78     9.38          9.26
  High School Diploma or Equiv.   37.17        23.79    30.21        30.56    30.74         29.63
  Some College                    30.64        32.04    30.28        28.69    28.08         28.17
  Bachelor's Degree or Higher     28.75        29.25    28.52        28.25    17.83         30.37
Household Income
  Under $10,000                    6.49         5.58     3.87         4.09     6.25          8.15
  $10,000-$24,999                 16.36        12.15    14.44        11.15    11.11         11.85
  $25,000-$49,999                 26.07        23.13    25.35        27.51    22.57         17.04
  $50,000-$74,999                 19.79        18.99    20.07        17.47    18.40         20.00
  $75,000 or more                 31.29        40.14    36.27        39.78    41.67         42.96
Children Under 18
  No                              72.53        65.44    66.20        65.43    63.19         67.04
  Yes                             27.47        34.56    33.80        34.57    36.80         32.96
MSA Status
  Metro                           78.73        83.08    82.39        84.01    83.68         82.22
  Non-Metro                       21.27        16.92    17.61        15.99    16.32         17.78
Table 2. Data Format Equating

Letter   Achievement   Percent Meeting Standard        Performance Index Out of 200
Grades   Levels        (Typical Range and Midpoint)    (Double the % Meeting Standard)
A        Advanced      90% and above (midpoint 95%)    190
B        Proficient    75% to 89% (midpoint 82%)       164
C        Basic         50% to 74% (midpoint 62%)       124
D        Below Basic   25% to 49% (midpoint 37%)        74
F        Failing       Below 25% (midpoint 12%)         24
Table 3. Distribution of School Scores

                   Condition 1:          Condition 2:      Condition 3:         Condition 4:
                   Performance Index     Letter Grades     Percent Proficient   Achievement Ratings
                   Acad   Art   Citiz    Acad  Art  Citiz  Acad  Art  Citiz     Acad         Art         Citiz
STRONG School 1    164    190   190      B     A    A      82    95   95        Proficient   Advanced    Advanced
AVERAGE School 2   124    164   164      C     B    B      62    82   82        Basic        Proficient  Proficient
WEAK School 3       74    124   124      D     C    C      37    62   62        Below Basic  Basic       Basic
Table 4. Average Satisfaction Ratings by Condition

                  Letter   Performance   Percent      Achievement
                  Grades   Index         Proficient   Level
Strong School     5.16     4.32          4.92         4.88
Average School    3.21     3.24          3.34         3.32
Weak School       1.85     2.14          2.20         2.08
Table 5. ANOVA Results

                          Sum of Squares   Mean Squares   F-Ratio
Satisfaction w/ Strong        105.6946        35.2315     17.90 **
Satisfaction w/ Middle          3.2967         1.0989      0.61
Satisfaction w/ Weak           19.4938         6.4979      4.45 *

* = p < .05
** = p < .01
Note: Satisfaction variables are the response variable for each model. The factor
variable for each model is condition with df=3.
Table 6. T-Test Results

School    Comparison   N1    Mean1     SD1       N2    Mean2     SD2       SE        T-Stat
Strong    PI vs. LG    278   4.31595   1.48052   265   5.16226   1.29396   0.11956   -7.07865 **
          PI vs. PM    278   4.31595   1.48052   286   4.92249   1.40409   0.12147   -4.99327 **
          PI vs. AL    278   4.31595   1.48052   267   4.88202   1.42331   0.12449   -4.54732 **
          LG vs. PM    265   5.16226   1.29396   286   4.92249   1.40409   0.11530    2.07955 *
          LG vs. AL    265   5.16226   1.29396   267   4.88202   1.42331   0.11796    2.37565 *
          PM vs. AL    286   4.92249   1.40409   267   4.88202   1.42331   0.12028    0.33648

Average   PI vs. LG    278   3.23501   1.31544   265   3.21384   1.23789   0.10974    0.19297
          PI vs. PM    278   3.23501   1.31544   287   3.34030   1.45208   0.11668   -0.90239
          PI vs. AL    278   3.23501   1.31544   267   3.32459   1.33807   0.11367   -0.78808
          LG vs. PM    265   3.21384   1.23789   287   3.34030   1.45208   0.11531   -1.09674
          LG vs. AL    265   3.21384   1.23789   267   3.32459   1.33807   0.11178   -0.99082
          PM vs. AL    287   3.34030   1.45208   267   3.32459   1.33807   0.11889    0.13212

Weak      PI vs. LG    279   2.13501   1.14309   265   1.84528   1.12440   0.09727    2.97843 **
          PI vs. PM    279   2.13501   1.14309   287   2.20035   1.29168   0.10263   -0.63668
          PI vs. AL    279   2.13501   1.14309   266   2.08459   1.25953   0.10295    0.48976
          LG vs. PM    265   1.84528   1.12440   287   2.20035   1.29168   0.10345   -3.43233 **
          LG vs. AL    265   1.84528   1.12440   266   2.08459   1.25953   0.10363   -2.30918 *
          PM vs. AL    287   2.20035   1.29168   266   2.08459   1.25953   0.10863    1.06568

* = p < .05
** = p < .01
PI = Performance Index Condition
LG = Letter Grade Condition
PM = Percent Meeting Goal Condition
AL = Achievement Level Condition
Appendix A: Sample Survey Excerpt
Introduction to School Data
Schools today are required to provide the public with annual information on their performance. Just
like students receive report cards to evaluate their performance in each subject area, schools are
evaluated in different subject areas and that information is provided in a school report card. These
report cards are then made publicly available through the Internet, which enables the public to judge
how well schools in their area are doing to meet their educational goals.
Imagine you are asked to evaluate your satisfaction with a school's performance based on its report
card data. On the following screens, you will be provided with school report card data for three high
schools. After examining the report cards, you will be asked to judge each school's performance.
CONDITION B: Letter Grades
[Programming Note: Order of Schools Should Be Randomized]
School 1.
Below are report card data for Oak High School. The performance of the students at Oak High
School has been measured and the resulting letter grades have been earned for each area. Letter
grades include A, B, C, D and F. Considering the provided data, please answer the accompanying
questions.
Oak High School

Educational Goal                               Letter Grade
Academics                                      B
Arts                                           A
Citizenship and Community Responsibility       A
Question 2.
Satisfaction means many things. Overall, how SATISFIED are you with Oak High School based on
these data?
Radio buttons 1-7, 1=very dissatisfied; 7=very satisfied
Question 3.
Considering all of your EXPECTATIONS for the performance of a high school in your state, to
what extent has the performance of Oak High School fallen short of your expectations or exceeded
your expectations?
Radio buttons 1-7, 1= fallen short of my expectations; 7=exceeded my expectations
Question 4.
Imagine the IDEAL high school for you and your household. How well do you think Oak High
School compares with your ideal?
Radio buttons 1-7, 1=very far from my ideal; 7=very close to my ideal
School 2.
Below are report card data for Elm High School. The performance of the students at Elm High
School has been measured and the resulting letter grades have been earned for each area. Letter
grades include A, B, C, D and F. Considering the provided data, please answer the accompanying
questions.
Elm High School

Educational Goal                               Letter Grade
Academics                                      C
Arts                                           B
Citizenship and Community Responsibility       B
Question 5.
Satisfaction means many things. Overall, how SATISFIED are you with Elm High School based on
these data?
Radio buttons 1-7, 1=very dissatisfied; 7=very satisfied
Question 6.
Considering all of your EXPECTATIONS for the performance of a high school in your state, to
what extent has the performance of Elm High School fallen short of your expectations or exceeded
your expectations?
Radio buttons 1-7, 1= fallen short of my expectations; 7=exceeded my expectations
Question 7.
Imagine the IDEAL high school for you and your household. How well do you think Elm High
School compares with your ideal?
Radio buttons 1-7, 1=very far from my ideal; 7=very close to my ideal
School 3.
Below are report card data for Cedar High School. The performance of the students at Cedar High
School has been measured and the resulting letter grades have been earned for each area. Letter
grades include A, B, C, D and F. Considering the provided data, please answer the accompanying
questions.
Cedar High School

Educational Goal                               Letter Grade
Academics                                      D
Arts                                           C
Citizenship and Community Responsibility       C
Question 8.
Satisfaction means many things. Overall, how SATISFIED are you with Cedar High School based
on these data?
Radio buttons 1-7, 1=very dissatisfied; 7=very satisfied
Question 9.
Considering all of your EXPECTATIONS for the performance of a high school in your state, to
what extent has the performance of Cedar High School fallen short of your expectations or
exceeded your expectations?
Radio buttons 1-7, 1= fallen short of my expectations; 7=exceeded my expectations
Question 10.
Imagine the IDEAL high school for you and your household. How well do you think Cedar High
School compares with your ideal?
Radio buttons 1-7, 1=very far from my ideal; 7=very close to my ideal