Education Economics
Vol. 20, No. 5, December 2012, 484–509
Class in the classroom: the relationship between school resources
and math performance among low socioeconomic status students in
19 rich countries
Katherine Baird*
Politics, Philosophy and Economics, University of Washington, Tacoma, WA 98402-3100, USA (*Email: kebaird@uw.edu)
(Received June 2009; final version received May 2010)
This paper investigates achievement gaps between low and high socioeconomic
students in 19 high-income countries. On average, math scores of students with
indicators of high socioeconomic status (SES) are over one standard deviation
above those with low SES indicators. The paper estimates the extent to which
these achievement gaps can be attributed to differences in classroom- and school-level resources available to students from different SES backgrounds. In some
countries, achievement gaps can be largely explained by differences in the
characteristics of schools attended. However, in many other countries, the gap
appears more closely related to differences in the characteristics of the students.
The results point to the importance of institutional differences among countries in
explaining international differences in the quality of education received by
different groups within a nation.
JEL classifications: I21; I28; I39
Keywords: math performance; educational policies; disadvantaged students;
international comparison
Inequality of outcomes among schools is a big reason why Chile’s income distribution
remains very unequal.1
Educational opportunity promotes social mobility … by distributing human capital in
many ways that are independent of social origins. (Beller and Hout 2006, 31)
Introduction
Educational systems are often scrutinized with respect to their role in contributing to
or weakening the intergenerational transmission of social status. One of the best internationally comparable measures of citizens’ human capital – student test scores –
shows that throughout the world, these scores are closely associated with students’
socioeconomic status (SES; Hanushek and Luque 2003; Schütz, Ursprung, and
Wöβmann 2005; Wöβmann 2003). Understanding why a link between SES and
youths’ human capital accumulation exists is clearly important for promoting social
mobility. This paper attempts to separate out the role of school policy and school
resources from family and student background characteristics to explain why low SES
students consistently underperform other students. Through an analysis of 2003 Trends in International Mathematics and Science Study (TIMSS) math scores among eighth
graders in 19 high-income countries, we attempt to explain the relative performance
of low and high SES students (the achievement gap) by estimating the extent to which
differences in average school- and classroom-level resources between these groups (as
opposed to differences in student characteristics) can explain the gap in student performance by SES. We find that in some countries, the school-resource differences that
exist between socioeconomic groups can account for a sizeable share of the achievement gap. However, in other countries, these resource differences do not explain the
achievement gap, indicating that addressing the human capital side of social mobility
is a more challenging and complicated undertaking in these countries. The results also
point to the importance of institutional differences among countries in explaining international differences in the quality of education received by different groups within a
nation.
Background and literature review
International analyses of student test scores consistently find that everywhere, test
scores are closely associated with students’ SES (Hanushek and Luque 2003; Schütz,
Ursprung, and Wöβmann 2005; Wöβmann 2003). What is unclear is the extent to
which this association is due to factors specific to the formal educational system
versus factors outside of the system. Yet knowing that the association between SES
and educational outcomes varies by country (Cavanagh 2007; Wöβmann 2003)
suggests that an international comparison of different students’ performance within
countries may prove instructive. Student test scores from one of the most recent international tests (the science PISA), for example, reveal that in the USA 18% of test score
variation among students can be explained by students’ socioeconomic characteristics, compared with an average among industrialized nations of 14%, and a low of
around 8% in Canada and Finland (Cavanagh 2007). One common criticism of the US
educational system is that students attend schools with different school- and classroom-level resources, depending on their SES background (Arroyo 2008; Cogan,
Schmidt, and Wiley 2001). For example, in one international comparison of the characteristics of schools attended by students whose mother had low levels of formal
education, such students in the USA attended schools with more poorly endowed educational resources relative to other students in the nation; and these gaps were larger in the
USA than they were in other countries (Baker 2003). Since many (although not all)
studies point to the importance of educational resources such as teacher quality on
student outcomes, one possibility is that school systems that result in a closer correlation between the quality of educational resources in schools and social status have
larger achievement gaps by social class (Betts, Zau, and Rice 2003; Education Policy
Center 2007; Goe 2007; Goldhaber and Brewer 2000). Alternatively, factors other
than differences in school resources might be more important in explaining a
country’s achievement gap.
The literature investigating and comparing different student performance patterns
within countries is quite recent. In an early effort to compare students’ performance
across different SES groups, Wöβmann (2004) examined 1995 TIMSS math and
science scores among middle school students in the USA and Western European
nations. He found that SES background was more strongly associated with students’
scores in Britain and Germany, while it was much less important in France and Flemish
Belgium. However, Wöβmann did not attempt to separate out the importance of family
background from the school attended, but rather focused on the combined effect of both.
Thus, while he is able to distinguish different patterns within nations, his study is not
able to identify the factors explaining why such differences exist.
A similar study by Schütz, Ursprung, and Wöβmann (2005) examined the role of
country-level features of school systems to explain variation in the link between SES
and test scores. However, they focused on the importance of differences at the
national level in explaining variation in average scores across nations, and not on
reasons for variation in scores within countries. Wöβmann (2003) similarly found that
differences in TIMSS scores between countries could be explained by institutional
differences between countries rather than differences in school resources between
countries, but he too does not address reasons for variation in scores within countries.
Other studies have used cross national datasets to investigate the link between test
scores, student background and school-level resources within countries (Wöβmann
2005a, 2005b; Wöβmann and Fuchs 2004), without making specific reference to
possible differences among social groups in the school-level resources made available
to them.
While some international research on student performance occasionally does
focus on outcomes for specific groups within a country – Jürges and Schneider (2004)
and Wöβmann (2005a) differentiate students in a country based on their TIMSS score,
and Doyle (2008) examines the role of class in explaining PISA scores in France and
England – very little research has investigated the factors explaining differences in
performance gaps across countries. Rather, studies commonly use cross national
datasets to investigate the link between test scores and school-level resources
(Wöβmann and Fuchs 2004; Wöβmann 2005a), without making specific reference to
differences among social groups in the sorts of school-level resources made available
to them. Given the somewhat stark differences across rich countries in the extent to
which citizens are socially mobile (Solon 2002), it is surprising that more focus has
not been placed on examining the possible contributions made by countries’ educational systems in explaining differences in social mobility.
Methodology and data
Measuring human capital acquisition among individuals is difficult, and in the past it has commonly been proxied by educational attainment. Recent studies have shown that cognitive skills, more than years of schooling, are a better predictor of individual
productivity (Hanushek and Kimko 2000), and research is establishing an important
link between cognitive skills (measured by test scores) and future career success (see
Hanushek and Luque 2003 for a discussion). A recent cross national study by
Hanushek and Wöβmann (2007) also concluded that measures of the quality of education in a country better predict changes in income distribution and economic growth
than does the quantity of education attained in these countries. For these reasons,
research assessing the link between educational systems and the human capital of citizens has increasingly focused on qualitative rather than quantitative measures of
educational attainment. For purposes of international comparisons, this typically
means using data obtained from PISA, PIRLS or TIMSS tests.
This paper’s analysis is based on eighth graders’ TIMSS math scores in 2003. To
limit the amount of variation among countries, the study uses the subset of countries
identified by the World Bank in 2005 as high-income economies.2 While some of the ‘countries’ in the study are in fact regions of countries, for simplicity in this paper we refer to these regions as countries.3 Figure 1 shows the average TIMSS 2003 math scores for high versus low SES students (the categorization of which is explained below), with the difference between them being what we refer to in this paper as the achievement, or test-score, gap. Figure 1 shows, for example, that Hong Kong and Quebec have very small achievement gaps, while Taiwan and Korea have quite large ones.

Figure 1. Average eighth grade TIMSS math scores by SES and country (2003).
Our strategy for investigating the factors explaining these 19 gaps is to borrow a well-established technique that labor economists use to identify the underlying
causes of differences in racial and gender outcomes. The main idea of the Oaxaca
decomposition is to quantify the extent to which differences in outcome means across
groups can be explained by differences in average characteristics within each group.
Typically, it is used to estimate the contribution of differences in individual characteristics between males and females, for example, in explaining gender wage gaps. For
our purposes, we use it to quantify the extent to which average math test-score gaps
between low and high SES students can be explained by differences in each group’s
average teacher, school and class characteristics – which together we henceforth refer
to as school-level resources – as compared to differences in average student
background characteristics.
In particular, we use the Oaxaca technique to estimate a counterfactual: in each country, what would low SES students’ average math scores have been had they had, on average, the same teacher, school, and classroom characteristics as the average high SES student? In this way, we arrive at estimates of the importance of school-level versus individual and background
characteristics in explaining a country’s test-score gap. As discussed earlier, one
hypothesis is that larger achievement gaps within countries correspond with larger
SES-based differences in students’ access to schools with advantageous characteristics; once accounting for these differences, achievement gaps will be more similar
across countries. Our strategy is to measure the extent to which this hypothesis is
supported by the data.
To decompose the achievement gaps into the portion attributed to schools, and the
portion due to student background characteristics, we use regression analysis to
estimate separately for each country,4 the relationship between the characteristics of
students’ teachers, schools, math classes, and background characteristics, and these
students’ test scores. For each country, this estimation gives us a vector of βs associated with each of these explanatory variables. We next calculate the counterfactual
case in which the average value of each school, classroom, and teacher characteristic for low SES students is instead set to the average value for high SES students, by multiplying the difference in the average value of each explanatory variable between low and high SES students by the estimated importance of that variable for student test scores (β).
In this way, we obtain an estimate of the portion of the test-score gap that can be
attributed to differences in average school-related characteristics between low and
high SES students.
To illustrate, let PS be the number of points in the total math test-score gap in a
country (Figure 1) that can be attributed to differences in the characteristics of low
versus high SES students’ math teachers, the schools they attend, and their math
classes. We estimate PS as:
$P_S = \sum_i \beta_i \left( \bar{X}_i^h - \bar{X}_i^l \right) \qquad (1)$
where each $\beta_i$ is the estimated marginal effect of the teacher, school, and classroom characteristic $X_i$ on test scores (discussed in greater detail below), $\bar{X}_i^h$ refers to the average teacher, school, and classroom characteristic $i$ that high SES students experience, and $\bar{X}_i^l$ refers to the average for low SES students. We can similarly calculate PB, the portion of the test-score gap attributable to differences in low versus high SES student background characteristics. We then decompose the gap in average test scores between low and high SES students into the part attributed to PS (school characteristics) and the part due to PB (student background characteristics).
Because the portions of the gap due to school (PS) and background (PB) characteristics together often underestimate the size of the achievement gap, we report the estimated importance of school resources as:

$\dfrac{P_S}{P_S + P_B + P_U} \qquad (2)$
where PU is the number of points that can be attributed to unmeasured factors in our
study. We use Equation (2) as an estimate of the percent of the gap that is attributable
to differences in school-level factors where unmeasured factors are positive (i.e., where
the model underestimates the test-score gap), thereby conservatively assuming that the
unmeasured factors are due to unmeasured student characteristics.5 For each country,
then, Equation (2) provides us with the estimated percent of the achievement gap that
is due to differences in the quality of school, teacher, and classroom characteristics
experienced by low versus high SES students in each country.
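For concreteness, the decomposition in Equations (1) and (2) can be sketched in a few lines of code. The sketch below is illustrative only: it assumes a pandas DataFrame with one row per student, a score column, and low_ses/high_ses indicator columns, together with a dictionary of already-estimated coefficients; these names are placeholders rather than the paper’s or TIMSS’s own.

```python
import pandas as pd

def decompose_gap(df, betas, school_vars, background_vars):
    """Oaxaca-style decomposition of the low/high SES score gap (Equations (1)-(2)).

    df              : one row per student; must contain 'score', 'low_ses', 'high_ses'
    betas           : {variable name: estimated coefficient} from the achievement model
    school_vars     : names of the teacher, school, and classroom variables
    background_vars : names of the student background variables
    """
    low = df[df["low_ses"] == 1]
    high = df[df["high_ses"] == 1]

    gap = high["score"].mean() - low["score"].mean()  # total achievement gap in points

    # P_S: points attributable to differences in average school-level characteristics
    p_s = sum(betas[v] * (high[v].mean() - low[v].mean()) for v in school_vars)
    # P_B: points attributable to differences in average student background characteristics
    p_b = sum(betas[v] * (high[v].mean() - low[v].mean()) for v in background_vars)
    # P_U: whatever part of the gap the measured variables leave unexplained
    p_u = gap - p_s - p_b

    share_school = p_s / (p_s + p_b + p_u)  # Equation (2)
    return {"gap": gap, "P_S": p_s, "P_B": p_b, "P_U": p_u, "share_school": share_school}
```

Since PS + PB + PU equals the raw gap by construction, Equation (2) amounts to dividing the school-related points by the total gap.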
Before proceeding, we should point out two common methodological problems
with the analysis we undertake here. First, estimated βs may not be the same for low
and high SES students. If we thought that high SES students’ scores were more
sensitive to the characteristics of schools than were low SES students’ scores, then the
true impact of school-level differences would be overstated if high SES students
attend on average schools with better characteristics. If on the other hand low SES
students’ scores were more responsive to measures of school quality (as suggested
by findings reported in Jürges and Schneider 2004), then school effects would be
understated. Later in this paper, we consider this possibility and discuss our results in
the light of estimates testing for this possibility.
A more difficult methodological problem is establishing that the βs capture the
causal relationship between the variable and test scores, rather than reflecting omitted
variables. This problem of omitted variable bias is the most important shortcoming
of this study. One method for reducing this problem is to take account of the nested
structure of observations (individuals are not randomly assigned to schools, and
thus school variables are not truly independent of test scores) by using multi-level
statistical techniques. To this end, we use a hierarchical linear model (HLM) to
account for this structural feature of the data, because it accounts for the fact that
observations within a school may be correlated with one another. To illustrate, take
the following general linear model:
$y_{ij} = \beta_0 + \beta_1 X_{1ij} + \beta_2 X_{2ij} + \cdots + \beta_m X_{mij} + r_{ij} \qquad (3)$
where i is the index for the individual, j is the index for the school, y is a
continuous outcome variable measuring students’ achievement or growth, X is a set of
m independent variables associated with individuals, and r is the error term where
$r_{ij} \sim N(0, \sigma^2)$.
Let W be a set of n school-level factors, associated with the school j attended by individual i, that influence β0 (the intercept) for each school. The equation,
$\beta_{0j} = \gamma_{00} + \gamma_{01} W_{1j} + \cdots + \gamma_{0n} W_{nj} + \mu_{0j} \qquad (4)$
models this assumption. If the X variables are all centered on the grand mean for all
students, then β0j, the intercept β0 for each school j, is the school mean score, adjusted
for the characteristics of students in the school. The random element µ0j is common
across students in a school. Combining (3) and (4) results in:
$y_{ij} = \gamma_{00} + \gamma_{01} W_{1j} + \cdots + \gamma_{0n} W_{nj} + \mu_{0j} + \beta_1 X_{1ij} + \beta_2 X_{2ij} + \cdots + \beta_m X_{mij} + r_{ij} \qquad (5)$
Equation (5) calls for a maximum likelihood estimation technique since the error term
(µ0j + rij) and its variance are school specific. HLM uses maximum likelihood, and
thus provides better and more efficient estimates than does a simple linear regression.
In this way, we reduce concerns over omitted variable bias by allowing for school-specific estimates.6 To further address this potential problem of omitted variables,
later in the paper we perform analyses based on subsets of students that we consider
to be more homogeneous, thus limiting differences among students in their unmeasured characteristics.
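For readers who want a concrete starting point, the two-level structure in Equation (5) can be estimated with standard mixed-model routines. The sketch below uses the MixedLM implementation in Python’s statsmodels rather than the HLM software itself, and the column names (score, id_class) and variable lists are placeholders, not the paper’s actual specification.

```python
import statsmodels.formula.api as smf

def fit_random_intercept(df, student_vars, class_vars):
    """Random-intercept model in the spirit of Equation (5): student-level covariates
    (the X's), classroom/school-level covariates (the W's), and a random intercept
    for each class, estimated by restricted maximum likelihood."""
    rhs = " + ".join(student_vars + class_vars)
    model = smf.mixedlm(f"score ~ {rhs}", data=df, groups=df["id_class"])
    return model.fit(reml=True)

# Hypothetical usage:
# result = fit_random_intercept(df, ["female", "hh_size", "age"], ["class_size", "min_math_week"])
# betas = result.params  # coefficients that feed the decomposition in Equations (1)-(2)
```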
Data
A total of 85,387 students took the 2003 TIMSS math test in the 19 countries used in
this study (International Association for the Evaluation of Educational Achievement
2003). We select those students who report being born in the country (bsbgborn =
1), and who at home primarily speak the test language (bsbgolan = 1 or 2), thereby
reducing the number of observations from 85,387 to 68,765, but also reducing the
influence on test scores of factors external to the country and its educational system.
Test scores
The TIMSS is a math and science test given every four years to fourth and eighth graders. Each participating school and country follows the same test-giving and data-generating process so that comparisons in math and science performance within and across countries can be made. In 2003 among eighth graders, the average country
TIMSS math score was 467, and ranged from 605 in Singapore to 264 in South Africa.
Of the 68,765 students in the 19 countries sampled in this study, the average
(weighted) score was 530, with a standard deviation of 72.
In addition to collecting test-score data for representative samples of students,
extensive data on family, school, teacher, and classroom characteristics are collected
on each participating student. TIMSS data also include a student weight for each
student since some student characteristics are oversampled, and others undersampled.
All of the results reported here are based on weighted observations. Table 1 contains
summary statistics for each country.
Table 1. Summary statistics by country, weighted by student weight: number of observations, average math score, and country means of the teacher, math class, student, and school characteristics described below.
Since students only take a portion of the TIMSS math test, for each student a
probability distribution of test results is generated, from which five plausible values
are sampled. HLM results in this paper are based on the results generated by using all
five plausible values, while OLS results are based on the average of each student’s
five plausible test-score values.7
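A rough sketch of how the five plausible values might be handled is given below: the five values are averaged for the single-score (OLS) analysis, and for the model-based analysis the model is estimated once per plausible value and the coefficients averaged. The column names follow the usual TIMSS 2003 convention (bsmmat01-bsmmat05) but should be treated as an assumption, and the simple pooling shown here ignores the between-imputation variance that a full treatment would add.

```python
import pandas as pd

# Assumed names of the five TIMSS math plausible values
PV_COLS = ["bsmmat01", "bsmmat02", "bsmmat03", "bsmmat04", "bsmmat05"]

def mean_plausible_value(df):
    """One score per student: the average of the five plausible values (used for the OLS results)."""
    return df[PV_COLS].mean(axis=1)

def pooled_coefficients(df, fit_fn):
    """Estimate the model once per plausible value and average the coefficient vectors.

    fit_fn(df, score_series) must return a pandas Series of coefficient estimates."""
    estimates = [fit_fn(df, df[pv]) for pv in PV_COLS]
    return pd.concat(estimates, axis=1).mean(axis=1)
```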
Socioeconomic status
TIMSS does not collect information on family income or occupation. Typically,
researchers using TIMSS or other international datasets capture SES through the
parents’ educational attainment, through the number of books that students report
in the house or through both (Schütz, Ursprung, and Wöβmann 2005; Wöβmann
2004, 2005a, 2005b). In this study, we capture students’ SES by the reported
number of books in each student’s home. This provides us with many more observations than would parent education, since students were much more likely to
answer the question about the number of books in the home than they were the
question about their parents’ educational attainment.8 Moreover, some argue that
books in the home may be a better measure of the extent to which academic
achievement is valued, and thus better captures the role that differences in the
home environment play in terms of youths’ outcomes. Prior TIMSS reports have
consistently shown a very strong correlation between books in the home and test
scores (Beaton et al. 1996; Mullis et al. 2000). In explaining this, Beaton et al.
write ‘The number of books in the home can be an indicator of a home environment that values literacy, the acquisition of knowledge, and general academic
support’ (1996, 99).
We label a student as high SES when she reports that her household has at least three bookcases of books (bsbgbook = 5), and as low SES when she reports that the household has at most 10 books (bsbgbook = 1). In this paper,
then, the achievement gap in a country (shown in Figure 1) refers to the difference in
average math scores between these two groups of students. On average, about 10% of
students in a country have indicators of low SES, although this percent varies from
about 4% to 23% (see Table 1).
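Coding the two SES indicators from the books-in-the-home item is straightforward; the sketch below assumes the TIMSS student file is held in a pandas DataFrame with a lower-case bsbgbook column, as the variable is written in the text.

```python
import pandas as pd

def add_ses_indicators(students: pd.DataFrame) -> pd.DataFrame:
    """Flag high SES (bsbgbook = 5, at least three bookcases of books) and
    low SES (bsbgbook = 1, at most 10 books), following the paper's cut-offs."""
    out = students.copy()
    out["high_ses"] = (out["bsbgbook"] == 5).astype(int)
    out["low_ses"] = (out["bsbgbook"] == 1).astype(int)
    return out
```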
Math teacher characteristics
Characteristics of students’ math teachers during the eighth grade may be important
factors in explaining students’ math performance. Students may learn less math when
their teacher is younger and less experienced, has a weaker background in math,
devotes less of the school day to math instruction versus other obligations, or has low
morale. The TIMSS data include a wide range of information on the math teacher of
each student who takes the math TIMSS test. For this study, the teacher-related variables that we include are (a) number of years teaching (btbgtaut), (b) a dummy variable constructed for teachers who are 25 years old or younger (btbgage = 1), (c) a
dummy variable constructed if the teacher has a degree in math (btbmpsma = 1), (d)
a dummy variable constructed if the math teacher has a teaching certificate or license
(btdgtelc = 1), (e) the percent of total periods in the day that the teacher spends
teaching math (btbmsptm/btbgsptt), (f) a dummy variable if the teacher has a high
level of educational attainment (btbgfedc = 6), and (g) a dummy variable if the teacher
reports low or very low job morale (btbgchts = 4 or 5).
School and classroom characteristics
School and classroom characteristics may also promote or hinder the learning of
math. Larger classrooms with more frequent disruptions, classrooms or schools with
less studious students, with poorer math-related resources or with a more challenging
population of students to teach may lead to less learning during the school year. To
capture such factors, we include the following variables, as reported by either the
student’s teacher or the student’s school principal: (a) the number of students in the
math class (btbmstud), (b) the minutes spent each week on math instruction
(btbmtimt), (c) a dummy variable indicating that a math textbook is used in the
classroom (btbmtbtc = 1), (d) the total number of students in the eighth grade
(bcbgeenr), (e) a dummy variable indicating that the principal reported more than
50% of students in the school came from a disadvantaged background (bcbgsbed =
4), (f) a dummy variable where the principal reported that more than 50% of the
students came from an advantaged background (bcbgsbea = 4), (g) a dummy variable
where the principal reported that math vacancies were very difficult to fill (bcbmfvay = 4), (h) a dummy variable where the principal reports a low school climate
(bcdgch = 3), (i) a dummy variable indicating that the principal reports low school
attendance (bcdgsp = 3), (j) a dummy variable where the principal reports a low
availability of resources for math instruction at the school (bcdmst = 3), (k) a
dummy variable where the principal reports that at least 90% of the students at the
school speak the test language (bcbgnala = 1), (l) a dummy variable where the principal reports that students of different ability levels receive different math curriculum
(bcbmodla = 3), and (m) the percentage of students in each class who are taught the
TIMSS math curriculum (btdmtoov).
Student background characteristics
To capture student background characteristics, in addition to measures of SES
discussed above, we include the following information that students provide about
themselves: (a) the number of people in the student’s household (bsbgplho), (b)
dummy variables indicating that the mother or father was born in another country
(bsbgfbrn = 1 or bsbgmbrn = 1), (c) the student’s age in years (also entered as a
squared value) (bsdage), (d) a female dummy variable (itsex, where 1 = girl), (e) a
dummy variable where the student reports spending less than 15 minutes daily on
math homework (bsbmhwmg = 1), and (f) a dummy variable where the student reports
always giving their best effort (bsbgattb = 1).
Findings and discussion
The results of the analysis discussed below are based on country-specific regression analyses in which students’ TIMSS math scores are regressed on all of the above characteristics of students and their schools. The analysis is based on country-specific rather than pooled regressions because in pooled regressions, the country fixed effects
pick up most of the explanatory value, and a test of the hypothesis that the coefficients
are identical among countries is rejected. Given the amount of variation among countries and the educational context within countries, this makes sense. Most likely for
these reasons, using country-specific production functions is standard practice when
comparing student performance across countries (Schütz, Ursprung, and Wöβmann
2005; Wöβmann 2004, 2005a, 2005b). The βs that result from the country-specific
regressions are used as described above in Equations (1) and (2).
All students
Our first attempt at measuring the importance of SES-based differences in school-level resources in explaining a country’s achievement gap is based on a two-level
HLM model that accounts for the non-random nesting of students in classes. This
analysis is based on the 41,764 of the 68,765 students in our sample for whom data on
all variables described above exist. The TIMSS data do not have enough classes
within the same school to permit a three-level hierarchical analysis, which would
account for the non-random nesting of classes within each school. For this reason,
school-level features are entered at the classroom level.
Table 2 reports the coefficient estimates by country, and Table 3 summarizes the
findings by country. The first column of Table 3 presents the achievement gap in
terms of average point differences between low and high SES students, while the
second column presents the gap in percentage terms. The third column, based on our
estimates using HLM and all students (Table 2), presents our estimate (using
Equation 2) of the percent of the gap that can be explained by differences in school-related resources. Included in our estimation are all of the school-related resources
(characteristics of teachers, classes, and schools) discussed above. In Taiwan for
instance, the estimate using HLM finds that 10.2% of the 126-point achievement gap
can be explained by the differences in school-level resources between high and low
SES students, while the other 89.8% can be explained by observed and unobserved
student characteristics.9 In eight countries (Korea, the Basque Country, New Zealand,
Norway, Australia, Ontario, Japan, and Italy), school-level differences in resources
explain less than 10% of the country’s achievement gap. On the other hand, in four
countries (Singapore, England, the Netherlands, and Hong Kong), differences among
the schools attended by low and high SES students can account for over 40% of the
achievement gap (see Table 3, Column 3).
Apart from these different patterns, Column 3 of Table 3 also shows that there is
no obvious link between the size of a country’s test score gap (Columns 1 and 2) and
the role of differences in school-level resources in explaining this gap. Among the
Table 2. Coefficient estimates by country (HLM, all students) for the teacher, student, math class, and school characteristics described in the Data section.
Table 3. Achievement gap and percent of gap explained by school resources, by country and estimation technique. Columns: (1) achievement gap, all students (points); (2) achievement gap (%); (3) percent of gap explained by the model, all students (HLM); (4) percent of gap explained, βs estimated from low SES students only; (5) achievement gap among H students (points); (6) percent of gap explained, βs estimated from H students.

Country        (1)   (2)   (3)      (4)      (5)    (6)
Taiwan         126   20    10.2%    6.7%     19.5   10.0%
Korea          102   16    7.2%     7.0%     22.1   6.5%
Sweden         83    16    24.2%    21.1%    53.1   22.4%
Basque         79    16    8.2%     3.2%     16.8   5.4%
Singapore      89    14    44.8%    18.4%    57.9   34.8%
England        93    17    81.2%    104%     80.4   77.2%
Scotland       88    16    33.1%    10.0%    62.7   30.8%
Netherlands    76    13    55.1%    42.9%    46.5   48.8%
USA            91    17    24.9%    16.4%    49.7   20.9%
New Zealand    88    17    6.0%     0.0%     30.8   6.4%
Norway         73    15    2.1%     -4.1%    7.6    2.0%
Australia      76    14    2.3%     1.3%     26.6   14.2%
Indiana        73    14    22.8%    21.3%    51.7   29.6%
Ontario        62    11    4.8%     9.0%     6.3    3.6%
Japan          71    12    4.4%     4.7%     2.9    5.9%
Belgium        63    11    28.6%    22.5%    22.0   28.8%
Italy          66    13    9.7%     8.6%     18.5   8.9%
Quebec         40    7     36.9%    50.0%    17.0   32.9%
Hong Kong      41    7     59.1%    61.9%    8.9    48.1%
19 countries, there is virtually no correlation between the size of the test score gap and
the extent to which differences in school resources between low and high SES
students can explain these gaps. This is a surprising result; we had expected that
countries with larger achievement gaps would correspond with greater social class-based differences in educational opportunities. But our findings suggest that the
ability of differences in school-based resources to explain a country’s achievement
gap is not related to the size of the gap. It thus appears that we must look to other
reasons to explain why some countries have much larger achievement gaps than other
countries.
One potential problem with the estimates derived above is that the approach
presumes that the marginal effect of school resources on low SES students is the same
as it is for high SES students. To explore the importance of this assumption, the next
section presents the results of an analysis identical to that performed above, but where
the marginal effects of resources (βs) are estimated using data from the subset of
students in each country with a low SES marker.
Low SES students
It is possible that low SES students’ test scores are more responsive to school
resources than are high SES students’ scores. If true, then the previous estimates that are
based on the average effect of school resources might underestimate the true effect of
resources on low SES students’ test scores. On the other hand, for reasons of selfselection discussed above, the estimates based on all students may be picking up a
correlation between inputs and unobserved student characteristics. If true, then the
estimates based on all students would most likely overestimate the effect of school
resources on the test-score gap. In this section we use low SES students only to estimate the importance of differences in school resources in explaining the test-score
gap; we do this by estimating the marginal effect of school and individual characteristics in our regression analyses using only low SES students (N = 3582). If the use of
this subset reduces the estimated importance of school resources, this would support
the contention that our previous estimates have overestimated the importance of
school resources in explaining the achievement gap because low SES students’ test
scores are less responsive to school characteristics than are other students (or the estimates are biased upwards by problems of omitted variables).
In Table 3, Column 4 presents the same estimates contained in Column 3, with the
difference that the estimated effect of differential school resources on explaining the
test-score gap (Equation 2) is calculated based on β estimates derived from country-level regressions using only low SES students’ data.10 The results are generally quite
consistent with those estimated using all students. In only five countries do the two estimates differ by more than 10 percentage points (more
than 10 percentage points smaller in Singapore, Scotland, and the Netherlands, and
more than 10 percentage points larger in England and Quebec). In the other 14 countries, using the more homogenous set of low SES students to reduce problems of omitted variables and to account for possible differences between low SES students and all
students suggests that we can have more confidence that estimates based on all
students are not inflated or deflated by problems of omitted variables or heterogeneity
in the student population.
Above average achievers
A second way of testing the consistency of the estimates is to find another subset of
students that may be more uniform in their characteristics and in their response to
school resources. In this section, we attempt to do this by eliminating from the
analysis about half of all the students in each country. While the precise justification
developed below for choosing which students to retain may not in fact hold true, the
exercise here can simply be seen as a way to obtain a more homogeneous subset of students to examine – which we believe it achieves in practice, even if in the end not for the reasons developed below.
Imagine there are two types of students, high achievers (H) and low achievers (L),
where the difference between the two lies in students’ levels of motivation, ability, and
effort expended on school. If H-type students’ scores are more responsive to school
resources, and L-type students less responsive, then estimates of the effects of school
resources on all students’ scores will overestimate their effect on L students, and
underestimate it on H students.11 And if L-type students are more prevalent among
low SES students, then the overall estimate of the effect of resources on the test score
gap will be overestimated.
In a final analysis, we test our findings by using a subset which might reasonably
be thought of as consisting primarily of H students, to estimate the importance of school
resources on test scores. We then identify the portion of the achievement gap due to
differences in school characteristics. We identify H students as those where the residual,
unexplained test score that remains after their and their school’s characteristics are
taken into account, is positive.12 This means that approximately half of the students in
each country will be considered H students, with the other half being L students. We
next conduct country-level analyses identical to those discussed above to identify the
impact of resource differences on explaining the test-score gap.
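The residual-based split can be sketched as follows. Purely for illustration, the residuals here come from a plain OLS fit of scores on the measured student and school characteristics; the paper’s residuals come from the country-specific models described above, and all column names are placeholders.

```python
import statsmodels.api as sm

def flag_h_students(df, score_col, covariate_cols):
    """Label students as H-type when their actual score exceeds the score predicted
    from their own and their school's measured characteristics (positive residual)."""
    X = sm.add_constant(df[covariate_cols])
    residuals = sm.OLS(df[score_col], X).fit().resid
    out = df.copy()
    out["h_type"] = (residuals > 0).astype(int)
    return out

# Hypothetical country-level re-estimation on the H subset, reusing the earlier sketches:
# h_only = flag_h_students(df_country, "score", covariates).query("h_type == 1")
# betas_h = fit_random_intercept(h_only, student_vars, class_vars).params
```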
Before proceeding, we comment on changes in the achievement gap once students
are divided into L and H students. Column 5 in Table 3 shows the achievement gap
among only those we identify as H students in each country. In all countries, the
average test-score gap between low and high SES students narrows considerably when
students are restricted to this H-type student population. To take an extreme example,
we find that the 126-point achievement gap in Taiwan (the largest gap in our study)
shrinks to only 19.5 points when restricted to H-type students. Only in Singapore,
Sweden, England, Scotland, the Netherlands, the USA, and Indiana does the gap remain
at least half the size it is when measured among all students. In most countries then,
unmeasured student (and possibly school) characteristics – whatever it is that distinguishes L from H students – are a significant factor in explaining the achievement gap in each country.
That noted, we use our subset of H students to check our previous results for
consistency. H students will be more similar to one another than will all students. This
set thus reduces the potential problem of omitted variables and can give us a more
reliable estimate of the importance of student and school characteristics on test scores.
On the other hand, the estimates may differ from previous ones because while having
addressed the problem of omitted variables, these students may be more responsive to
school-level resources than are L students, in which case we might expect this subset
to generate higher estimates of the importance of school-level factors in explaining
achievement gaps.
Thus, our third and final analysis is based on β coefficients derived from
regressions using only these subsets of students, and is otherwise identical to the
previous two analyses. Since our β estimates come from H students, the estimates can
be seen as answering a hypothetical question: how big would the achievement gap be in each country if low SES students had the same school resources as high SES students and responded to school resources as our subset of H students in their country do?
Our analysis based on this H subset of students reveals results quite consistent with
our initial estimates (Table 3, Column 6). With this subset, in only three countries does
the estimate of the importance of differences in school resources change by 10 percentage points or more – 10 lower in Singapore, 12 higher in Australia, and 11 percentage
points lower in Hong Kong. By and large, the estimates are remarkably similar to
previous estimates. This consistency strengthens our belief that the original estimates are reasonable, and not an artifact of omitted variables.
Returning, then, to our estimates of the sources of the achievement gap using data
on all students, Figure 2 provides a graphical representation of our estimates of the role
of school-resource differences in explaining the achievement gap in the 19 countries
(‘explained gap’). In some countries such as England, the Netherlands, and Singapore,
a significant share of these countries’ achievement gap can be explained by differences
in school resources available to low versus high SES eighth graders. Our estimates
indicate that in the Netherlands and England, were low SES eighth graders provided
with the same school resources as are high SES students in these countries, these low
SES students would perform as well as their SES counterparts do in the highest scoring
countries of Taiwan, Korea, Japan, and Quebec. On the other hand, in other countries,
notably Taiwan, Korea, the Basque Country, Norway, Australia, and Ontario, resource differences do not explain much of the achievement gap. In these countries, we estimate that providing these students with the same resources as their high SES counterparts in their country have would not improve their math performance.

Figure 2. Average achievement gaps adjusting for differences in school resources.
Table 4 further summarizes our results. We distinguish between countries where
differences in school-level resources can account for at least 30% of the achievement
gap (‘resource differences matter’) and those where they account for less than this
(‘resources don’t matter’). We further distinguish between higher scoring countries
(where the average TIMSS math score was at least 500) and lower scoring countries
(below 500). As shown, we estimate that resource differences matter in some high and
some low-scoring countries, and that they also don’t matter in some high- and low-scoring countries.
While a general pattern is thus not obvious, one pattern we do find relates to our
previous discussion of H and L students. Column 3 shows the percent by which the
achievement gap is reduced when it is measured strictly among H students rather than when
measured among all students. As shown, among countries where resources don’t
matter, the gap among H students is significantly lower than that among all students
(on average 69% lower), whereas in countries where resources matter, the gap among
H students is on average only 42% lower than that among all students. In other words,
whatever the unobservables are that distinguish L from H students within countries,
we find an interesting pattern where these unobservables matter most in countries
where measured features of schools do not explain achievement gaps.
Column 4 presents one final piece of information. It shows the percent of low SES students in each country that we identify as also being underperforming (L) students (or, more accurately, as having disadvantageous unobservables). As shown, in countries where resources do not matter, low SES students are disproportionately also L students (on average, 64% of them are also L students). In countries where we find that resource differences do help to explain the achievement gap, on average only 56% of low SES students are also L students.

Table 4. Characteristics of countries by importance of school resources in explaining the achievement gap. Columns: (1) average TIMSS score; (2) achievement gap (%); (3) reduction in the achievement gap when measured among H students only (%); (4) percent of low SES students who are also L students; (5) importance of school resources on the achievement gap (%, from Column 3 of Table 3).

                                              (1)   (2)   (3)   (4)   (5)
Resource differences matter
  Higher-scoring countries
    Singapore                                 605   14    35    53    45
    Netherlands                               536   13    39    59    55
    Hong Kong                                 586    7    78    53    59
    Quebec                                    543    7    58    51    37
  Lower-scoring countries
    England                                   498   17    14    56    81
    Scotland                                  497   16    29    62    33
  Average, all ‘resources matter’ countries   544   12    42    56    52
Resource differences don’t matter
  Higher-scoring countries
    Taiwan                                    585   20    85    68    10
    Japan                                     570   12    96    58     4
    Ontario                                   521   11    90    67     5
    Indiana                                   508   14    29    49    23
    USA                                       504   17    45    66    25
    Belgium                                   537   11    65    58    29
    Korea                                     589   16    78    66     7
    Australia                                 506   14    65    78     2
  Lower-scoring countries
    Basque                                    487   16    79    56     8
    Norway                                    461   15    90    77     2
    Sweden                                    499   16    36    64    24
    New Zealand                               494   17    65    66     6
    Italy                                     483   13    72    63    10
  Average, all ‘resources don’t matter’ countries   519   15    69    64    12
Thus, we see an interesting pattern that in countries where resource differences do
not explain gaps, what does explain them are the unmeasured characteristics of students – low SES students have particularly disadvantageous unmeasured characteristics. On the other hand, in the remaining subset of countries we find that the achievement gap can be
explained by SES-based differences in school resources. In these countries, we consistently find that about one-third to three-fourths of the achievement gap (52% on average) can be explained by differences in the characteristics of the schools attended by low versus high SES students.
In contrast, in the resource doesn’t matter category of countries, disadvantageous
factors not measured in the study – be it lack of effort or something else – are strongly
associated with low SES status; and it is this rather than the measured features of the
school systems that explain the achievement gap. In Taiwan, Korea, Norway, and Italy
(‘resources don’t matter’ countries), for example, the achievement gap nearly
disappears among H students; however, few low SES students are H-type students.
In some high-performing systems such as Taiwan, Korea, and Japan, this finding
might be illustrated by the practice among families with money of sending
lower performing students to what are called bu xi ban in Taiwan or juku in Japan.
These ‘cram’ or ‘shadow’ schools are designed to help students keep up with the
content of their school work, especially in math (Baker et al. 2001; Lee 1999). Those
without resources in these high-performing countries may not have recourse to such
schools, and thus may not get the additional help that their more advantaged peers
acquire to keep up with others. What the explanation for this pattern might be in the
lower performing countries of the Basque Country, Norway, Australia, and Italy is
less clear, but the evidence is consistent with the general explanation that school
systems in these countries provide fairly equal opportunity for low and high SES
students, but that they disadvantage L-type students. Unfortunately, these L characteristics are disproportionately found in low SES students, and it is this that goes a long
way in explaining the achievement gap.
Conclusions and implications
Given the growing importance of human capital for social mobility and given the
persistence of social immobility in rich countries, understanding the source of
achievement gaps in countries is an important area of inquiry. A comparison across
the countries is long overdue, especially given the differences between the countries
that this paper highlights.
This paper has investigated the source of achievement gaps in the 19 countries by
borrowing the Oaxaca decomposition to partition the gap into that due to school characteristics, and that due to student characteristics. The study used HLM, and also
confirmed the robustness of the estimates through analyses of two subsets of students
with more homogenous characteristics to address problems of omitted variable bias.
While one can never be sure that omitted variable bias does not explain the estimates
reported in this paper, these two additional analyses provide us with greater
confidence that our initial estimates capture the extent to which gaps in countries are
due to student versus school characteristics.
One main finding is that in all countries, unmeasured characteristics associated
with students explain a significant portion of the achievement gap – on average the
achievement gap is roughly cut in half when one limits the students to those who score
above what their measurable characteristics would predict. By the nature of being
unmeasured, we cannot say what these unmeasured characteristics are, or what they
are related to. However, our findings are consistent with these characteristics being
related to unmeasured student characteristics rather than unmeasured features of the
schools they attend. Thus, our findings imply a large share of the achievement gap can
be attributed to characteristics of low SES students rather than to differences in school
resources available to low versus high SES students.
This finding is particularly true in Taiwan, Ontario, Indiana, the USA, Belgium,
Korea, Basque Country, Norway, Sweden, New Zealand, Australia, Japan, and Italy:
differences in measured school characteristics between low and high SES students
play little role in explaining these countries’ large achievement gaps. Rather, in these
countries the results suggest that achievement gaps can foremost be traced to disadvantageous student characteristics which are disproportionately found in low SES
students. While we don’t know what these are, in the paper we hypothesize that this
might represent the hard to measure but important characteristic of student motivation,
interest, and effort expended in school. Indeed in these countries, there is much less
difference in the (measured) characteristics of schools attended by low SES students
than by high SES students when compared with countries where we find that resource
differences matter.13
One should not conclude, of course, that in these countries little can be done to
reduce the achievement gap; but the finding does suggest that policies to address the
gap need to look beyond the school for solutions. It could
be that in these countries, the educational environment of many low SES students
outside of school is particularly unsuited to human capital accumulation (thus
accounting for the disadvantageous unmeasurables associated with low SES
students). It could also be that the achievement gap in these countries can be traced to
a school system that succeeds in providing equal opportunity for students as long as
students bring certain characteristics with them; and students without these characteristics are disproportionately found among low SES students. In high scoring countries,
we have suggested that it may be due to the ability and practice of wealthier families
to purchase the services available from ‘cram schools’ needed for their children to
keep up with a demanding school system. In lower scoring countries, low SES
students may be more prone to boredom and disinterest. To be clear, though, we are
unsure what might explain these patterns in the lower-scoring countries. However, in
all six of the low-scoring countries where we estimate that resource differences do
explain the achievement gap, providing low SES students with the same resources that
high SES students receive would, by our estimates, still result in average test scores
among this group of less than 500 points. In some countries, then, this study does not
support the belief that resources affect student performance and help explain why low
SES students underperform high SES students.
A second main finding is that in some countries (Singapore, England, Scotland,
Hong Kong, Quebec, and the Netherlands) measured school characteristics are much
more important in explaining the achievement gap. In these countries, both H- and
L-type low SES students (in our interpretation, those who are motivated and try hard
versus those who don’t) are primarily and somewhat equally disadvantaged by an
educational system offering a less advantageous set of educational resources. In these
six countries, we estimate that were these students sent to schools that ‘looked like’
those attended by high SES students, the achievement gap would be about one-third
to three-quarters smaller. In these countries, then, unlike in the first set, we reach the
more straightforward conclusion that reducing the achievement gap means first of all
improving the quality of the schools that low SES students attend.
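For readers who want to see the counterfactual logic spelled out, the following is a minimal sketch of the 'send low SES students to schools that look like those attended by high SES students' exercise. It is illustrative only: the file name, variable names, and the use of simple OLS on a single-country extract are assumptions, not the paper's actual estimation code (which uses HLM and the TIMSS plausible values).

```python
# Stylized sketch, not the paper's estimation code: what would low SES students'
# average score be if their schools "looked like" those attended by high SES students?
# File and column names ("math_score", "ses_group", school_vars) are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("timss_2003_one_country.csv")                       # assumed data extract
school_vars = ["class_size", "teacher_certified", "school_climate"]  # illustrative set

low = df[df["ses_group"] == "low"]
high = df[df["ses_group"] == "high"]

# Estimate the score-resource relationship among low SES students (plain OLS here).
X_low = sm.add_constant(low[school_vars])
beta = sm.OLS(low["math_score"], X_low).fit().params

# Counterfactual: replace low SES students' mean school characteristics with the
# high SES means, holding the estimated coefficients fixed.
x_actual = X_low.mean()
x_counter = x_actual.copy()
for v in school_vars:
    x_counter[v] = high[v].mean()

predicted_gain = float(x_counter @ beta - x_actual @ beta)
gap = high["math_score"].mean() - low["math_score"].mean()
print(f"Share of the gap closed by equalizing school resources: {predicted_gain / gap:.2f}")
```

The point of the sketch is simply that the counterfactual changes the school characteristics assigned to low SES students while holding the estimated coefficients fixed, and compares the predicted score with the observed gap.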
How do we reconcile the two different patterns we find? In this respect, this study
may raise more questions than it answers. But we offer two tentative speculations.
First, Wöβmann and Fuchs (2004) highlighted the importance of differences in institutional setting in explaining test-score differences between countries. Many wealthier
countries appear to be on the ‘flat’ part of the production function, where additional
resources (absent institutional change) make little difference to student performance.
Our findings suggest the same might be true of performance differences within
countries: allocating more resources to low SES students will not make much of a
difference absent institutional change, or absent changes in non-school-related
policies that affect low SES students and their families.
Second, reconciling these different patterns gives us some reason to pause over our
conclusion that indeed ‘Resources Matter’ in some countries. As mentioned above, in
the six ‘resources matter’ countries, low SES students typically attend less well-endowed schools than do their high SES counterparts. In the ‘resources don’t matter’
countries, differences in school characteristics between low and high SES students are
much smaller. Yet there is no real difference in the average achievement gap between
these two sets of countries. This leads us to question whether the true ‘cause’ of the
achievement gap in the ‘resources matter’ countries is really the schools, or whether
providing more resources to low SES students in these countries would result in the
country switching from the ‘resources matter’ category to the ‘resources don’t matter’
group. That so many countries are in this latter category provides some reason to be
skeptical of the potential for ‘better schools’ – at least better in their measured
characteristics – to reduce the achievement gap.
In the end, these speculations indicate the importance of cross-national studies
examining institutional features of countries and their role in explaining the
relative performance of countries’ low SES students. Our study finds that despite
relatively high levels of SES-based differences in ‘school quality’, low SES students
in Quebec and Hong Kong, for example, do nearly as well as their high SES counterparts – and perform better than high SES students do in many other countries as well.
Addressing the human capital side of social mobility is a challenging and complicated
undertaking, but is clearly an area deserving of more detailed country-level analyses;
and Quebec and Hong Kong may be the right places to start.
Acknowledgements
I would like to thank the US Department of Education’s National Center for Education Statistics for training support, Sara de la Rica, Mark Long and Ricardo Mora for helpful suggestions,
two anonymous reviewers, and participants in a seminar at the Departamento de Fundamentos
del Análisis Económico II, Universidad del País Vasco, and at the University of Washington’s
Center for the Study of Demography and Ecology, for useful comments. A draft of this paper
was presented at the International Association for the Evaluation of Educational Achievement
(IEA) Third Annual Research Conference, Taipei, Taiwan, September 2008.
Notes
1. The Economist. 15 December 2007. Chile: Playground harmony, p. 54.
2. See http://web.worldbank.org/WBSITE/EXTERNAL/DATASTATISTICS/.
3. The regions are Indiana, the Basque Country, Flemish Belgium, Ontario, Quebec, and Hong Kong. The countries in the study are Australia, Italy, Japan, Korea, Netherlands, New Zealand, Norway, Singapore, Sweden, England, Scotland, Taiwan, and the USA.
4. Further on we address why we do not pool the data.
5. In a few instances, the unmeasured characteristics are negative (the model overestimates the test-score gap). In these cases, we leave P_U out of the denominator, thus estimating the percent of the total test-score gap (Equation (2)) as P_S/(P_S + P_B). As in Equation (3), the purpose of this is to bias downward our estimate of the importance of school characteristics.
6. This approach is similar to a fixed-effect approach, but assumes some underlying distribution to these school-level effects. As discussed below, in this study we substitute the classroom level for the school level.
7. Using the average of the five plausible values does lead to a greater likelihood that the explanatory variables will be found to be significantly related to test scores; however, it also generates unbiased estimates of the size of this relationship (the coefficient). For this paper, it is the size rather than the statistical significance of the variables that we are most interested in, since the focus of the study is on the cumulative effect rather than on the effect of individual variables. Moreover, a comparison of HLM results using all five plausible values with those using the average value found no real difference in the coefficients or their significance.
8. Only 1.5% of students in this study did not answer the question concerning the number of books in the home, whereas over 30% either did not answer or answered ‘I don’t know’ to questions concerning their parents’ educational attainment. Moreover, analyses indicate that these missing responses are strongly correlated with ‘books in the home’, suggesting a strong non-random component to who did and did not answer questions about parents’ educational level. The overall correlation between books in the household and parents’ educational attainment is 0.36, and the correlation between these alternative ways of measuring the achievement gap in this study is 0.7.
9. As mentioned earlier, where on average the model underestimates the gap, we attribute the remaining unexplained gap to unobserved student characteristics, thereby giving a lower bound on the estimated importance of school characteristics.
10. The model used in this and the next section is identical to that used above, with two differences. First, estimates are derived using OLS rather than HLM, because limiting the analysis to low SES students results in too few observations per classroom to use HLM. Second, the dependent variable is the average of the five plausible values, in contrast with HLM, which performs five separate regressions and reports the average estimated value and standard deviation of the coefficients. While using the average value does lead to a lower standard error, the estimated βs are unbiased, which is all that is used in estimating the counterfactuals in this study (a stylized sketch of this averaging-and-OLS step follows these notes).
11. Wöβmann (2004), for instance, found that school characteristics are of greater importance for high-scoring students.
12. In this manner, a student will of course be labeled an H student if it is the unmeasured characteristics of the student’s school, rather than the student’s unmeasured individual or background characteristics, that lead to a higher test score. If so, the H label captures a more highly effective school rather than a more able, motivated, or otherwise advantaged student. While we cannot identify the reason for a student’s higher-than-predicted performance, we believe that selecting students in this way does reduce the heterogeneity among the students, and thus reduces the potential problems of self-selection and omitted variables bias in this study, which is the objective of this additional estimation.
13. Evidence available from author.
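As referenced in note 10, the following is a minimal sketch, under assumed file and variable names, of the averaging-and-OLS step: the five TIMSS plausible values are averaged into a single dependent variable and regressed on student and school characteristics. It is illustrative only and not the paper's actual code.

```python
# Minimal sketch of the averaging described in notes 7 and 10 (hypothetical column
# names): average the five TIMSS plausible values and run OLS of the average on
# student and school characteristics.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("timss_2003_low_ses.csv")      # assumed low SES subsample
pv_cols = [f"math_pv{i}" for i in range(1, 6)]
df["math_avg"] = df[pv_cols].mean(axis=1)       # average of the five plausible values

# Coefficients from this regression are unbiased even though the standard errors
# are understated; only the coefficients feed the counterfactual calculations.
fit = smf.ols("math_avg ~ books_home + parent_educ + class_size + teacher_certified",
              data=df).fit()
print(fit.params)
```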
References
Arroyo, C. 2008. The funding gap. Washington, DC: Educational Trust Fund.
Baker, D. 2003. Should America be more like them? Cross-national high school achievement
and US policy. Brookings Papers on Education Policy. Washington, DC: The Brookings
Institution.
Baker, D., M. Akiba, G.K. LeTendre, and A.W. Wiseman. 2001. Worldwide shadow
education: Outside school learning, instructional quality of school, and cross-national
mathematics achievement. Educational Evaluation and Policy Analysis 23, no. 1: 1–17.
Beaton, A., I. Mullis, M.O. Martin, E.J. Gonzalez, D.L. Kelly, and T.A. Smith. 1996. Mathematics achievement in the middle school years: IEA’s TIMSS. Chestnut Hill, MA: Boston
College. http://timss.bc.edu/timss1995i/TIMSSPDF/BMathAll.pdf
Beller, E., and M. Hout. 2006. Intergenerational social mobility: The United States in comparative perspective. The Future of Children 16, no. 2: 19–36.
Betts, J., A. Zau, and L. Rice. 2003. Determinants of student achievement: New evidence from
San Diego. San Francisco: Public Policy Institute of California.
Cavanagh, S. 2007. Poverty’s effect on US scores greater than for other nations. Education
Week Online, December 7.
Cogan, L., W. Schmidt, and D. Wiley. 2001. Who takes what math and in which track?
Education Evaluation and Policy Analysis 23, no. 4: 323–41.
Doyle, A. 2008. Educational performance or educational inequality: What can we learn from
PISA about France and England? Compare 38, no. 2: 205–17.
Education Policy Center. 2007. Baltimore city’s high school reform initiative: Schools,
students and outcomes. Washington, DC: The Urban Institute. http://www.urban.org/
UploadedPDF/411590_baltimoreschools.pdf
Goe, L. 2007. The link between teacher quality and student outcomes. Washington, DC:
National Comprehensive Center for Teacher Quality.
Goldhaber, D., and D. Brewer. 2000. Does teacher certification matter? Educational Evaluation and Policy Analysis 22, no. 2: 129–45.
Hanushek, E., and D. Kimko. 2000. Schooling, labor-force quality, and the growth of nations.
American Economic Review: Papers and Proceedings 90, no. 5: 1184–208.
Hanushek, E., and J. Luque. 2003. Efficiency and equity in schools around the world.
Economics of Education Review 22: 481–502.
Hanushek, E., and L. Wöβmann. 2007. The role of education quality for economic growth.
World Bank Policy Research Working Paper No. 4122. Washington, DC: World Bank.
http://ssrn.com/abstract=960379.
International Association for the Evaluation of Educational Achievement. 2003. TIMSS 2003
international database. http://timss.bc.edu (accessed June 2001).
Jürges, H., and K. Schneider. 2004. International differences in student achievement. German
Economic Review 5, no. 3: 357–80.
Lee, J. 1999. Missing links in international educational studies: Can we compare the US
with East Asian countries? International Electronic Journal for Leadership in Learning 3,
no. 18.
Mullis, I., M. Martin, E.J. Gonzalez, K.D. Gregory, R.A. Garden, K.M. O’Connor, S.J.
Chrostowski, and T.A. Smith. 2000. TIMSS 1999 international mathematics report. Chestnut
Hill, MA: Boston College. http://timss.bc.edu/timss1999i/pdf/T99i_Math_All.pdf
Schütz, G., H. Ursprung, and L. Wöβmann. 2005. Education policy and equality of opportunity. CESifo Working Paper 1518. Munich: Ifo Institute for Economic Research.
Solon, G. 2002. Cross-country differences in intergenerational earnings mobility. Journal of
Economic Perspectives 16, no. 3: 59–66.
Wöβmann, L. 2003. Schooling resources, educational institutions and student performance:
The international evidence. Oxford Bulletin of Economics and Statistics 65, no. 2: 117–70.
Wöβmann, L. 2004. How equal are educational opportunities? Family background and
student achievement in Europe and the United States. CESifo Working Paper 1162.
Munich: Ifo Institute for Economic Research.
Wöβmann, L. 2005a. Educational production in Europe. Economic Policy 20, no. 43: 447–504.
Wöβmann, L. 2005b. Educational production in East Asia: The impact of family background
and schooling policies on student performance. German Economic Review 6, no. 3: 331–53.
Wöβmann, L., and T. Fuchs. 2004. What accounts for international differences in student
performance? A re-examination using PISA data. CESifo Working Paper 1235. Munich:
Ifo Institute for Economic Research.