Interdistrict Comparisons of Intradistrict School
Performance Inequality in Louisiana: 2001-2009
Lead Author: Mark J. Schafer, LSU Agricultural Center
Co-author: James M. Fannin, LSU Agricultural Center
Abstract
The No Child Left Behind (NCLB) Act was enacted in 2001 to reduce the
performance gap in educational achievement across various subgroups within the United
States. Most scholars agree that the act has led to significant systemic changes within
education systems. Certainly, some schools have shown substantial improvement under
NCLB. However, school improvement varies significantly across schools. One study
conducted in Louisiana found that one in five schools either failed to improve or actually
declined in performance from 2001 to 2004, and that this was typical not only of the state as a
whole but also of various regions and school districts (Schafer 2007). To understand this
issue better, this paper first develops a district-level measure of intradistrict school
performance inequality that assesses variation in performance across comparable schools
within the same school district. Second, school- and district-level data
are used to compare intradistrict school performance inequality across Louisiana’s school
districts at the beginning of the No Child Left Behind Act in 2001 (N=66). Third, we
examine changes in intradistrict inequality over time by using the latest available data for
the 2008-2009 school year (N=69). Fourth, the paper examines relationships between
district-level characteristics and intradistrict school performance inequality. Fifth, and
finally, the paper discusses implications for theory, research, and policy when broad-based
policies can have very different consequences for schools that are positioned differently
within the performance hierarchy of their school districts.
This paper has been prepared to be presented at the Annual Meeting of the Southern
Regional Science Association, March 26, 2010, in Washington D.C.
This research was made possible through funding by a cooperative agreement with the
Minerals Management Service (MMS) Coastal Marine Institute. We used data made publicly
available by the Louisiana Department of Education.
I. Introduction
This research seeks to understand the extent to which No Child Left Behind-style
accountability achieved its stated objective of reducing gaps in educational
achievement, in one rather particular sense, in Louisiana, a state heavily influenced
by the oil and gas industry. As the title of the law implies, the objective of No Child Left
Behind is to reduce student-level achievement gaps. In practice, however, states and
school districts seek to improve students’ test scores by holding schools accountable for
their students’ academic progress. Therefore, much scholarship has focused on
school-level achievement gaps. Within this nested hierarchy, our focus moves up an additional
level to the school district. Specifically, we are interested in developing a district-level
method that can be used for district-level comparative analysis, in which the focus is on
the extent of school-level performance variation or inequality within districts.
Louisiana is an ideal state within which to conduct a cross-district comparison of
within-district school performance inequality. First, its school districts are nearly
coterminous with its parishes (counties), so comparisons very likely reflect other
parish-level inequalities. Second, the southern half of the state has long been influenced by the
development, expansion, and changes that have occurred within the offshore oil and gas
industry, while the northern portion of the state has not been particularly influenced by
oil and gas. Hence, the state offers the opportunity to conduct a natural experiment
comparing resource-dependent and resource-independent parish school districts. Third, racial
inequalities in educational performance can be traced back in history to segregation and
urbanization, and these inequalities persist in both urban centers and rural communities.
Racial disparities, however, may have more to do with East/West than with North/South
differences within the state. Fourth, Louisiana’s parish/school districts range from small,
isolated rural districts with fewer than 10 schools to large urban districts with over 100
schools, creating the possibility of comparing within-district performance inequalities
across a range of district-types. We hope the interdistrict comparisons of intradistrict
performance outcomes will help policy-makers in Louisiana and similarly situated states.
We begin this paper with a discussion of why we expect districts to vary with
respect to their internal, school-level performance disparities. From there we move on to
our approach to developing meaningful measures of intradistrict inequality. We then use
those measures to present descriptive performance inequality data for Louisiana’s school
districts in 2001 and 2009, the beginning and current point in the No Child Left Behind
era. After that we present findings from cross-sectional and panel analyses, and we
conclude with some discussions of the strengths and limitations of our work and suggest
some potentially fruitful ways to carry this work forward in the future.
II. Intradistrict Outcomes Inequality: What is it? Why does it matter?
Most studies of educational inequalities focus on (1) resources instead of
outcomes and (2) cross district instead of within-district comparisons (Iatarola and Stiefel
2003). The research devoted to understanding resource inequalities across districts was
spurred by interest in the effects of court-ordered educational finance reforms. Studies in
the 1990s showed that these reforms led to increased spending in poor districts in states
where they occurred (Murray, Evans, and Schwab 1998). The more recent focus on
within-district comparisons has tended to focus on large urban school districts with large
numbers of schools. These studies emerged in response to findings of increased equity in
cross-district educational resource distribution (Summers and Wolfe 1976). In Los
Angeles, Chicago (Rubenstein 1998), New York City (Iatarola and Stiefel 2003), and other
large urban school districts, scholars argued that intradistrict comparisons underscored
the extent of resource inequities within these areas.
A few studies have attempted to determine whether school-level resource
disparities were greater within or between districts. Hertert (1995) compared per-pupil
expenditures in California, Burke (1999) compared teacher-pupil ratios across large
districts, and Owens and Maiden (1999) examined instructional expenditures in Florida.
In general, these studies demonstrated that resource inequalities exist both within and
between districts.
Rubenstein (2006) provides an overview of a growing body of literature that
focuses on the extent and type of resource inequalities within a single large district. He
concludes that within district disparities are extensive and that they generally
disadvantage the neediest schools, particularly with respect to quality teacher resources.
This study complements this research on educational resources by shifting the
focus to cross district comparisons of school-level outcomes. The research on
educational outcome inequalities is less extensive. Berne (1994) analyzed outcome
disparities across school districts in New York State, and found that for “outcome
indicator after outcome indicator, New York City and the large urban school districts, the
high poverty schools, and the high minority schools performed at substantially lower
levels”.
Several scholars have approached the study of outcomes from an efficiency
perspective. In a review of the empirical findings, Belfield and Levin (2002) linked
outcomes to increased market competition among education providers. Other scholars
have argued that when variable outcomes are achieved by like districts with similar inputs,
the resulting measures can be seen as measures of inefficiency (Ruggiero, Miner, and
Blanchard 2002). Policy makers can utilize this information to improve the efficiency of
education delivery.
Like input studies, research on educational outcomes tends to focus on individual
student-level outcomes. Our focus is similar but slightly different. Our district-level
analysis focuses on variation across districts in school-level performance. Understanding
inequities across schools within each district brings the focus to broader, parish level
factors that support more equitable or inequitable school-level outcomes. We are
particularly interested in the extent to which school-level inequities mirror broader socioeconomic inequalities within Louisiana parishes, and whether such inequalities are
exacerbated or attenuated by linkages to the off-shore oil and gas industry.
The next section discusses challenges to measuring outcome inequality, with a
particular emphasis on the state of Louisiana.
III. Measuring Intradistrict Performance Inequality in Louisiana
The state of Louisiana established a formula for each school to calculate a School
Performance Score (SPS) based on a combination of test scores, attendance, and (for
schools with grades 7 and up) dropouts. When accountability was initially established in
the 1998-1999 school year, Louisiana policy-makers set a 10-year benchmark for all
schools to achieve an SPS of 100 or greater. The passage of the federal No Child Left
Behind Act required policy-makers to add subgroup component scores, but the SPS formula
continued to be used and, thus, the state currently defines any school with an SPS over
100 as a "successful school". In 2001, 223 schools (16.2%) had attained "successful
school" status, and that number increased to 363 schools (28.5%) by 2009. In 2003 the
state also implemented a "star" system, tied to the SPS and cost factors, that ranged from
1 to 5 stars, with 5 stars designating the best schools. Schools earning only 1 star were
seen as barely adequate and risked having the state take more direct action if they failed
to meet annual improvement goals. In 2009, 32% of all schools had earned only 1 star or
worse, while nearly 40% of schools earned 2 stars and another 30% had earned 3 or more
stars.
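As a point of reference, the state classifications described in this paper reduce to two thresholds (SPS of 100 or greater for a "successful school" and, as used later in the text, SPS below 70 for a "low performing" school); a minimal sketch, omitting the star cut points, which are not reported here:

```python
def classify_school(sps: float) -> str:
    """Map a School Performance Score to the state labels cited in this paper.

    SPS of 100 or greater -> "successful school"; SPS below 70 -> "low performing".
    The 1-5 star cut points are not reported in the text and are omitted here.
    """
    if sps >= 100:
        return "successful"
    if sps < 70:
        return "low performing"
    return "neither"
```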
Many studies of resource inequalities in education draw upon one or more of four
measures of inequality: (1) the federal range ratio, (2) the coefficient of variation, (3) the
Gini coefficient, and (4) the McCloone Index. The first three are true measures of
dispersion or inequality in a distribution while the last focuses only on the bottom half of
a distribution and, therefore, may be better understood as a measure of adequacy. The
choice of inequality measure, according to Kaplow (2005), depends on a combination of
the conceptualization of the problem and the empirical facts of particular contexts. At
this point in our exploration, we are less interested in the specific details of intradistrict
inequality than in (1) the range of school performance scores within
districts, (2) the situation of the worst performing schools relative to the median
performance in a particular district (McCloone Index), and (3) the extent to which all low
performing schools are failing to meet federal and state required performance goals (we
use the Percentage of Low Performing Schools).
For the first measure, we use the interquartile range rather than the range ratio.
Of the three indicators we employ, it is the only true measure of within-district school
performance dispersion. For the
second measure we use the McCloone Index. This measure is best understood as a
measure of within-district adequacy and, therefore, its interpretation is dependent upon
the median school performance in the district. For example, if the median school
performance is low then a high McCloone Index would simply mean that all schools in
the district are relatively equitably weak. The third measure is not technically a measure
of intradistrict inequality; rather, it is a measure of the capacity of each district to provide
quality educational services to its children. The combination of these three indicators
provides a richer picture of both within and between district inequalities.
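A minimal sketch of how these district-level indicators can be computed from a district's school-level SPS values; it assumes a conventional McLoone-style formulation (mean SPS of the below-median schools divided by the median SPS) and the SPS < 70 low-performing cutoff used later in the paper, and it is not the authors' exact code:

```python
import numpy as np

def district_indicators(sps_scores, low_cutoff=70.0):
    """District-level inequality/adequacy indicators from school SPS values.

    Returns the interquartile range, coefficient of variation, a McCloone-style
    index (mean SPS of below-median schools divided by the median SPS), and the
    share of schools below the state low-performing cutoff.
    """
    sps = np.asarray(sps_scores, dtype=float)
    q75, q25 = np.percentile(sps, [75, 25])
    median = np.median(sps)
    below = sps[sps < median]
    return {
        "iqr": q75 - q25,
        "cov": sps.std(ddof=1) / sps.mean(),       # sample coefficient of variation
        "mccloone": below.mean() / median if below.size else 1.0,
        "pct_low": float((sps < low_cutoff).mean()),
    }
```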
IV. Intradistrict Inequality in 2001: The Beginning of Accountability
Table 1 presents a descriptive picture of intradistrict school performance
inequality in 2001. The indicators of performance, number of schools, inequality, and
adequacy are listed in the left column. The next column lists overall averages for the
state, and the following 4 columns compare intradistrict inequalities across MMS and
nonMMS parishes (South and North) and large and small school districts, respectively.
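The group comparisons in Table 1 can be reproduced along the following lines from a school-level file; the file and column names below are illustrative placeholders rather than Louisiana Department of Education field names:

```python
import pandas as pd

# Illustrative layout: one row per school with its district, region (MMS/nonMMS),
# size class (large/small), and 2001 SPS.  Names are placeholders.
schools = pd.read_csv("schools_2001.csv")   # columns: district, region, size_class, sps

# District-level indicators first ...
districts = (
    schools.groupby(["district", "region", "size_class"])["sps"]
    .agg(mean_sps="mean",
         median_sps="median",
         iqr=lambda s: s.quantile(0.75) - s.quantile(0.25),
         cov=lambda s: s.std() / s.mean(),
         pct_low=lambda s: (s < 70).mean())
    .reset_index()
)

# ... then averaged within each comparison group, as in the table's columns.
cols = ["mean_sps", "median_sps", "iqr", "cov", "pct_low"]
print(districts[cols].mean())                        # statewide column
print(districts.groupby("region")[cols].mean())      # South/North (MMS/nonMMS)
print(districts.groupby("size_class")[cols].mean())  # Large/Small
```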
Several key findings from Table 1 are instructive. First, the typical school district
in the state had a mean School Performance Score of 77.1 (the state defined schools with
an SPS>100 as “successful schools”). In general oil and gas influenced (southern) and
large school districts tend to have higher mean and median school performance scores.
This finding challenges theoretical frameworks that suggest that urban and resource-dependent regions are disadvantaged. Second, the dispersion measures (IQR, S.D., CoV)
all show few differences between MMS and nonMMS parishes, but significant
differences between large and small districts, as well as differences within these broad
categories. For example, among school districts with fewer than 25 schools, the
interquartile range ranges from 6.1 across the eight schools in Iberville Parish to 46.2
across the eight schools in Pointe Coupee Parish.
Third, the McCloone index shows that, within districts, the worst performing
schools tend to do worse relative to the median when the district is large. While this index
is typically used to analyze equity in funding, here we can interpret the index in a specific
way. Within all Louisiana school districts, schools that perform below the median SPS of
76.6 have an average SPS of 85% of the median, or 65.1. If the sample is limited to
small school districts, low performing schools score on average at about 90% of the
median score of 75.5, or 68.0, while the low performing schools in large districts average
only 82% of the median score of 80.0, or about 65.6.
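In formula terms, the index reported here expresses the bottom half of each district's distribution relative to its median, so the average SPS of the below-median schools can be recovered as the product of the two quantities:

```latex
\text{McCloone} \;=\; \frac{\overline{\mathrm{SPS}}_{\text{below median}}}{\operatorname{median}(\mathrm{SPS})}
\qquad\Longrightarrow\qquad
\overline{\mathrm{SPS}}_{\text{below median}} \;=\; \text{McCloone} \times \operatorname{median}(\mathrm{SPS}).
```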
Fourth, the typical school district in Louisiana had about 35% of its schools
classified as “low performing” in 2001. That percentage was lower for MMS and large
districts and higher for northern and small districts. In three districts, none of which were
among the 33 MMS-defined districts, all the schools were low performing in 2001. Fifth,
the school-level enrollment, percent minority, percent free and reduced lunch, and special
education figures show broad similarities across both North and South Regions and
across large and small school districts. Louisiana is unique in this respect, because its
schools with large percentages of poor and minority students are not isolated in large
urban districts.
V. Change in Intradistrict Inequality
Table 2 presents descriptive statistics relating to the change in intradistrict
inequality and adequacy from 2001-2009. First, Mean School Performance Scores
increased by about 11.5 points overall, in both the North and the South, and in both large
and small school districts. Notwithstanding the fact that four school districts saw
declines in mean SPS during this period, the overall impression is that NCLB-style
accountability led to school improvement throughout the state. We should also note that
the overall improvement picture, and particularly that of large school districts, is
somewhat distorted by the major changes in Orleans Parish School district, which went
from 116 schools in 2001 to 16 in 2009 (thus, technically becoming a small district) when
the state took control of the vast majority of the schools after Hurricane Katrina in 2005.
Second, the next two indicators in Table 2 show declines in two measures of
intradistrict school performance inequality: about a 3 ½ point decline in the interquartile
range and about a .06 point decline in the coefficient of variation. Again, these declines
generally occurred across regions and district types.
Third, the final two indicators of performance adequacy provide somewhat
competing pictures of changes in adequacy. On the one hand, the McCloone Index
increased from about .85 to .90, showing that the low performing schools in districts
earned school performance scores in 2009 that were closer to the median, on average,
than the low performing schools in 2001. The bottom half is closer to the median than it
was before. On the other hand, the percentage of state-defined low-performing schools
(those with an SPS score of less than 70) declined only slightly overall, and did not
decline at all in nonMMS districts. The combination of these two findings supports the
finding by Schafer (2007) that overall school improvement in the state masked the
reality that about a third of all schools in the state, and within most regions and districts,
saw stagnating or declining school performance from 2001 to 2005.
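Given district-level indicator tables for the two years (built as sketched earlier), the change scores summarized in Table 2 follow from a simple merge; the file and column names are again illustrative:

```python
import pandas as pd

# Illustrative inputs: one row per district and year with the indicators above.
d01 = pd.read_csv("district_indicators_2001.csv")
d09 = pd.read_csv("district_indicators_2009.csv")

change = d01.merge(d09, on="district", suffixes=("_01", "_09"))
for col in ["mean_sps", "iqr", "cov", "mccloone", "pct_low"]:
    change[col + "_chg"] = change[col + "_09"] - change[col + "_01"]

# Statewide averages of the change scores, as summarized in Table 2.
print(change[[c for c in change.columns if c.endswith("_chg")]].mean())
```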
VI. Determinants of Intradistrict Inequality at the Beginning of Accountability
Table 3 presents an OLS regression analysis of determinants of intradistrict mean
school performance, inequality, and adequacy in 2001. The explanatory variables listed
in the left-hand column include district-level measures of (1) size (overall student
enrollment); (2) student characteristics: percent at risk (defined as those eligible for free
or reduced-price school lunch), percent minority, and percent special education students;
(3) district resources: pupil-teacher ratios, total revenues per student, total instructional
expenditures per student, and percent of adults in the parish in poverty; and (4) district
structural characteristics: being among the MMS-defined parishes or being a large
district.
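In code, the cross-sectional models can be specified roughly as follows with statsmodels; the variable names are placeholders for the district-level measures just listed, and the exact specifications (for example, where mean SPS enters as a regressor) follow Table 3 rather than this simplified sketch:

```python
import statsmodels.formula.api as smf

# `districts_2001` is assumed to be a DataFrame with one row per district
# and the illustrative column names used below.
rhs = ("enrollment + at_risk + minority + special_ed + pupil_teacher"
       " + revenues_pp + expenditures_pp + adult_poverty + mms + large")

models = {
    outcome: smf.ols(f"{outcome} ~ {rhs}", data=districts_2001).fit()
    for outcome in ["mean_sps", "iqr", "cov", "mccloone", "pct_low"]
}
print(models["mean_sps"].summary())
```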
The first model estimates the mean school performance score in the district. The
results demonstrate the strong negative relationship between the percentage of minority
students in a district and the mean school performance scores.
The next two models estimate the two measures of dispersion, interquartile range
and the coefficient of variation. Interestingly, the percentage of at risk students in a
district is associated with a reduction in the interquartile range, while higher pupil-teacher
ratios are linked to a wider range of school performance scores. This pattern does not hold
for the coefficient of variation, which is positively associated with instructional
expenditures, poverty, and large school districts. To understand why greater instructional
expenditures are associated with increased performance inequality across schools, it
would be necessary to explore the allocation of these resources across the schools in the
districts. One possibility is that greater expenditures are allocated toward special status
magnet and gifted and talented schools (which are also more common in large school
districts), but this is speculation at this point.
The final two models explore the issue of adequacy. The results for the
McCloone Index reveal that, relative to the median school in their own district, the lower
performing schools fare better in districts with a higher mean SPS, more at risk students,
and more special education students. On the other hand, the lower performing schools
have a lower relative performance in larger versus smaller school districts. With respect
to satisfying state-defined criteria for school performance, the findings mirror the
findings for mean school performance. The key factor in explaining both mean school
performance and percentage of low performing schools is the percentage of minority
students in the district.
VII. Determinants of Change in Intradistrict Inequality
Table 4 presents OLS regression results for the change in intradistrict school
performance inequality from 2001 to 2009. The first model shows that the increases in
mean school performance were largely a result of the mean school performance in 2001;
districts with high initial mean performances realized smaller increases. We included
Orleans parish as a dummy variable due to its special circumstances resulting from
Hurricane Katrina—the school district saw huge gains in its mean school performance,
but that is linked to the reality that it remained with only 16 of its initial 116 schools, the
rest of which were taken over by the state.
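A sketch of this first change model, with the 2001 level entering as a lagged regressor and Orleans as an indicator variable; the names are illustrative, and `change` refers to the merged district table sketched earlier (with the district characteristics assumed to be merged in as well):

```python
import statsmodels.formula.api as smf

# Change in mean SPS regressed on its 2001 level, district characteristics,
# and an Orleans indicator (illustrative names).
model = smf.ols(
    "mean_sps_chg ~ mean_sps_01 + enrollment + at_risk + minority + special_ed"
    " + pupil_teacher + revenues_pp + expenditures_pp + adult_poverty"
    " + mms + large + orleans",
    data=change,
).fit()
print(model.params["mean_sps_01"])  # expected negative: higher 2001 SPS, smaller gains
```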
The second model shows that two factors were associated with changes in the
dispersion as measured by the interquartile range. Higher district-wide enrollments were
associated with increases (or smaller decreases) in the interquartile range from 2001 to
2009, while districts with higher instructional expenditures realized greater reductions in
the range of school performance inequality. The third model offers a slightly different
picture of changes in school-level dispersion about the mean. MMS and Orleans parishes
realized significantly greater declines in inequality than nonMMS parishes from 2001 to
2009, while declines were less pronounced in districts with high enrollments, more at risk
students, and more special education students. (Note that the positive sign does not
necessarily indicate increased inequality; it is possible that school performance inequality
was reduced but these factors significantly reduced the rate of reduction).
The fourth model demonstrates that the lower performing schools within each
district improved their scores, relative to the median score, in districts with a higher
percentage of at risk students. At the same time, districts with high percentages of
minorities and more of the adult population in poverty saw more limited change in the
relative performance of the weakest schools. Finally, the findings presented in the fifth
model suggest that significantly larger reductions in the percentage of low performing
schools occurred in Orleans and MMS parishes.
VIII. Discussion
This research sought to understand variation in school performance both within
and across school districts in the state of Louisiana. We tempered our initial interest in
interdistrict comparisons of intradistrict performance inequality with a consideration of
the issue of adequacy. On one extreme, a school district may have relative equity in
performance across schools within the district while, at the same time, every school in the
district is considered “low performing” by state-defined standards of student test scores.
In such a case the question of intradistrict equity across schools would take a back seat to
the larger question of why the performance is lacking throughout the district.
On the opposite extreme, another district may have a relatively large internal
differentiation in performance between its best and worst performing schools, but every
school (even the lowest performing school in the district) is considered by the state to be
“adequate”, meeting state and federally-defined minimum performance standards. In
cases such as these, stakeholders may appreciate the fact that district leaders were able to
bring all schools up to the minimum standards, yet still ask valid questions as to why
there is considerable intradistrict inequality.
Most Louisiana school districts lie somewhere in between these two extremes
and, therefore, need to be cognizant of both their own performance relative to other
districts in the state and the performance of their various schools relative to each other
within the district.
Our results strongly suggest that intradistrict inequalities have declined during the
accountability/NCLB era. Three of the four indicators of inequality used (interquartile
range, coefficient of variation, and McCloone Index) suggest both significant declines in
the overall dispersion of school performance within districts and improvements in the
relative performance of schools in the bottom half of the range. At the same time,
relative to state standards, school districts have seen very little change in the proportion
of low performing schools. By and large, districts with no low performing schools in
2001 also had no low performing schools in 2009, those with all low performing schools
in 2001 also had all low performing schools in 2009, and the rest realized only marginal
increases and decreases in the percentage of low-performing schools.
A reasonable next step in this analysis would be to explore the relationship
between outcomes and, to the extent it is possible to get good data, school-level inputs.
The primary objective of this approach would be to use our knowledge of outcomes to
describe the extent to which districts have the capacity to use inputs efficiently (or to
spread them efficiently across schools). The data envelopment procedure described by
Ruggiero et al. (2002) would be useful in this regard. The procedure allows for more
appropriate comparisons related to the efficient use of educational resources, because the
input/outcomes relationships are mediated by differences in educational costs (e.g. large
transportation costs in rural schools, high poverty costs, high proportion of students with
disabilities, etc.). This kind of analysis seems like a fruitful way to provide educational
policy and decision makers with valuable information rooted in cross district
comparisons of within district processes.
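To make this suggestion concrete, the sketch below computes standard input-oriented, constant-returns DEA efficiency scores with scipy; it is a generic formulation rather than the cost-adjusted procedure of Ruggiero et al. (2002), which additionally conditions on environmental cost factors such as those listed above:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y):
    """Input-oriented, constant-returns-to-scale DEA efficiency scores.

    X : (n_districts, n_inputs) array of inputs (e.g., per-pupil spending, staffing).
    Y : (n_districts, n_outputs) array of outcomes (e.g., mean SPS, McCloone index).
    Returns scores in (0, 1]; a score of 1.0 places the district on the frontier.
    """
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):                      # one small linear program per district
        c = np.zeros(1 + n)
        c[0] = 1.0                          # decision vars: [theta, lambda_1..lambda_n]
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]                 # inputs:  sum_j lambda_j x_ij <= theta * x_io
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                 # outputs: sum_j lambda_j y_rj >= y_ro
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores[o] = res.x[0]
    return scores
```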
References
Belfield, Clive R. and Henry M. Levin. 2002. The Effects of Competition on
Educational Outcomes: A Review of the US Evidence. National Center for the
Study of Privatization in Education. Columbia University, New York.
Berne, Robert. 1994. “Educational Input and Outcome Inequities in New York State.”
Pp. 1-23 in Outcome Equity in Education, edited by Robert Berne and Lawrence
O. Picus. Corwin Press Inc: Thousand Oaks, California.
Hanushek, Eric A. and Ludger Woessmann. 2005. "Does Educational Tracking Affect
Performance and Inequality? Differences-in-Differences Evidence across
Countries." NBER Working Paper No. W11124. Available at SSRN:
http://ssrn.com/abstract=667166.
Iatarola, P. and L. Stiefel. 2003. “Intradistrict Equity of Public Education Resources and
Performance.” Economics of Education Review 22:69-78.
Kaplow, Louis. 2005. “Why Measure Inequality?” Journal of Economic Inequality
3:65-69.
Murray, S. E., W. N. Evans, and R. M. Schwab. 1998. "Education-Finance Reform and
the Distribution of Education Resources." The American Economic Review 88:
789-812.
Rubenstein, Ross. 1998. “Resource Equity in the Chicago Public Schools: A School-level
Approach.” Journal of Education Finance 23, 4: 468-489.
Ruggiero, John, Jerry Miner, and Lloyd Blanchard. 2002. “Measuring Equity of
Educational Outcomes in the Presence of Inefficiency.” European Journal of
Educational Finance 142:642-652.
Schafer, Mark J. 2007. “School Accountability in Louisiana.” LSU Agricultural Center
Bulletin Number 887. LSU Agricultural Center.
Table 1: Descriptive Statistics for Intradistrict School Performance Inequality in 2001:
State, by Region, and by District Size (Large vs. Small)

Indicator            State    South    North    Large    Small
Mean SPS             77.1     80.2*    74.0     80.6     75.8
Low Mean             45.2     48.3     45.2     48.3     45.2
High Mean            102.3    102.3    98.4     102.3    98.4
Median SPS           76.6     79.8*    73.5     80.0     75.4
Mean Number          21       28       13       48       10
Low Number           3        6        3        26       3
High Number          116      116      64       116      20
Mean IQR             20.8     20.1     20.7     25.8     15.5
Low IQR              6.1      6.1      6.6      12.6     6.1
High IQR             51.2     31.6     51.2     51.2     46.2
Mean S.D.            14.7     15.1     14.2     18.8     13.1
Low S.D.             3.7      5.4      3.7      9.4      3.7
High S.D.            32.1     28.8     32.1     32.1     25.8
Mean CoV             .20      .20      .20      .25      .18
Low CoV              .05      .06      .05      .09      .05
High CoV             .60      .60      .43      .60      .43
Mean McCloone        .85      .85      .85      .82      .90
Low McCloone         .60      .72      .60      .72      .60
High McCloone        .97      .97      .94      .91      .97
Mean LP Schools      .35      .29      .41      .29      .37
Low LP Schools (a)   0        0        0        0        0
High LP Schools (b)  1.00     .83      1.00     .83      1.00
Mean Enrollment      466      520      411      559      431
Minority %           48.0     43.0     53.0     46.0     48.8
At Risk %            63.5     62.0     65.2     60.5     64.7
Special Ed. %        12.8     13.7     11.9     13.6     12.5
Num of Districts     66       33       33       18       48

* significant at p<.05
(a) Beauregard, Caldwell, Cameron, Catahoula, Jefferson Davis, LaSalle, Livingston,
Plaquemines, St. Charles, St. Tammany, Vernon, West Carroll, Winn
(b) Madison, St. Helena, Tensas
Table 2: Descriptive Statistics for Change in Intradistrict School Performance Inequality:
2001-2009

Indicator            State    South    North    Large    Small
Mean SPS 2001        77.1     80.2     74.0     80.6     75.8
Mean SPS 2009        88.6     92.0     85.5     92.5     87.3
SPS Change           11.5     11.5     11.5     11.5     11.5
Low Change *         -7.6     -7.6     -4.7     1.7      -7.5
High Change (1)      52.5     52.5     24.9     52.5     24.9
Mean IQR 2001        20.8     20.1     20.7     25.8     18.5
Mean IQR 2009        16.5     16.4     16.6     19.0     15.8
IQR Change           -3.5     -2.7     -4.0     -4.6     -3.0
CoV 2001             .20      .20      .20      .25      .18
CoV 2009             .14      .13      .14      .16      .13
CoV Change           .06      .06      .06      .08      .05
McCloone 2001        .85      .85      .85      .81      .86
McCloone 2009        .90      .90      .89      .89      .90
McCloone Change      .04      .04      .04      .06      .04
LP Schools 2001      .35      .28      .41      .29      .37
LP Schools 2009 (2)  .33      .24      .41      .27      .34
LP Change            -.02     -.06     0.0      -.02     -.03
Num of Districts     66       33       33       18       48

* Cameron, Union, and West Carroll districts
(1) Orleans Parish School District, due to loss of over half its schools to state takeover after
Hurricane Katrina
(2) In 2009, 17 districts had no low performing schools, while four had all low performing
schools
Table 3: OLS Regression showing Determinants of Intradistrict School Performance
Inequality, 2001

Indicator          Mean SPS          IQR              CoV            McCloone        Low Performing
Mean SPS 2001      ---               -.15 (-.97)      ---            3.5** (2.88)    NA
Enrollment 2001    8.71 (0.85)       -5.72 (-.47)     .03 (.30)      -.04 (-.42)     -.07 (-.28)
At Risk 2001       -14.64 (-1.21)    -33.17* (2.30)   -.23 (-1.81)   .25* (2.16)     .43 (1.48)
Minorities 2001    -44.06** (-6.24)  3.90 (.35)       .23** (2.93)   .03 (.33)       .91*** (4.96)
Special Ed 2001    .25 (.02)         -28.13 (-.67)    -.59 (-1.58)   .75* (2.20)     .45 (.52)
Pupil-Teacher 01   -.05 (-.04)       4.09** (2.82)    .02 (1.92)     -.02 (-1.77)    .01 (.24)
Revenues 01        3.91 (1.76)       -1.00 (-.43)     -.02 (-1.25)   .01 (.41)       -.00 (-.40)
Expenditures 01    -1.81 (-1.11)     16.1* (2.36)     .12* (1.99)    -.10 (-1.75)    .00 (.29)
Poverty 01         -6.32 (-.26)      31.14 (1.1)      .58* (2.33)    -.24 (-1.05)    .18 (.31)
MMS                -1.4 (-.64)       -2.15 (-.82)     .02 (1.07)     -.00 (-.09)     -.00 (-.00)
Large              2.83 (1.27)       6.5 (2.48)       .06* (2.43)    -.04* (-2.15)   -.03 (-.71)
Adj. R-Squared     .73               .31              .50            .33             .70
Number of Cases    66                66               66             66              66

Note: t-statistics in parentheses.
Table 4: OLS Regression showing Determinants of Intradistrict School Performance
Inequality Change: 2001-2009

Indicator          Mean Change   IQR Change   CoV Change   McCloone Change   Low Perf. Change
Lagged D.V.        -
Mean SPS 2001      Same          0            0            0                 NA
Enrollment 2001    0             +            +            0                 0
At Risk 2001       0             0            +            +                 0
Minorities 2001    0             0            0            -                 0
Special Ed 2001    0             0            +            0                 0
Pupil-Teacher 01   0             0            0            0                 0
Revenues 01        0             0            0            0                 0
Expenditures 01    0             -            0            0                 0
Poverty 01         0             0            0            -                 0
MMS                0             0            -            0                 -
Large              0             0            0            0                 0
Orleans            +             0            -            0                 -
Adj. R-Squared     .67           .50          .81          .69               .37
Number of Cases    66            66           66           66                66

Note: + and - denote statistically significant positive and negative coefficients; 0 denotes no
significant effect.