Chaos, Complexity and the study of Education Communities

Rod Cunningham
Torfaen County Borough Council
Paper presented to the British Educational Research Association Annual
Conference, University of Leeds, 13-15 September 2001
Abstract
Theories of chaos and complexity have achieved some success in advancing the understanding of non-linear systems in the physical world. The three principal conditions for a chaotic system are: that it operates in a non-linear way; that it is iterative (the output of one cycle becomes the input of the next); and that small variations in initial conditions lead to large differences in outcomes. Many systems within education appear to meet these conditions. This paper explores the possible usefulness of chaos and complexity in an education context. It is hypothesised that some important events at pupil, class and school level may be understood within a chaos and/or complexity perspective: for example, cognitive dissonance at pupil level, and the school community dealing with an adverse inspection report at school level. It is further
hypothesised that chaos theory and complexity may provide an alternative to the reductionist approach of
some school effectiveness work on the one hand and the localism of qualitative case studies on the other.
Complexity theory may provide a tool for tracing the emergence of simple organising principles from the
complexity of social interaction and have implications for the study of schools and their communities.
Approaches to the study of Education Communities
School Effectiveness Research is now a well-established discipline. The work is not
without its critics, but it is not the aim of this paper to engage in that debate. The paper
does argue, however, that there are limits to the School Effectiveness Research [SER]
paradigm, because insufficient account is taken of the dynamic nature of
educational establishments. This is not to deny the importance of the work done within
SER over the past 30 years, but rather to point out that the underlying assumptions
contained in much of this work may place limits on the understanding that it achieves. I
will argue further that the methodological sophistication recently achieved within SER
now makes its limitations apparent. It may be useful to consider a parallel with the
development of Newtonian science in the understanding of the physical world. Such
science is limited to the explanation of linear behaviour: Newton’s laws of motion
satisfactorily describe a vast range of everyday events, but they are of no use in
explaining turbulent flow, for example. I will argue that there may be many occasions
when positivist and/or reductionist approaches to the study of education communities
reach a similar impasse.
Reynolds et al. [2000] identify three major strands of School Effectiveness Research:
School Effects Research – studies of the scientific properties of school effects evolving
from input-output studies to current research utilizing multilevel models;
Effective Schools Research – research concerned with the processes of effective
schooling, evolving from case studies of outlier schools through to contemporary studies
merging qualitative and quantitative methods in the simultaneous study of classrooms
and schools;
School Improvement Research – examining the processes whereby schools can be
changed, utilizing increasingly sophisticated ‘multiple lever’ models. [Reynolds et al.
2000, page 3]
The third category in Reynolds et al.’s typology may contain some work which captures
the dynamics of educational institutions. Much of the work in the other two, however,
espouses reductionist assumptions and methodology. School effectiveness researchers
realise the limitations of their paradigm, as will be discussed in the next section.
Goldstein demonstrates the levels of uncertainty contained in educational measurements
and argues that making fine distinctions between schools’ performance is untenable. I
will argue that these limitations are more than simply methodological and that they are
inherent in assumptions about linearity and stability within schools. A discussion of these
issues will act as a starting point from which to explore the use of ideas from the science
of complexity in education research.
Problems with league tables in education which reveal the limitations of the SER
paradigm.
Goldstein [1997] describes three important types of assessment in education. These are
formative assessment, summative assessment and the use of data to evaluate the
performance of an educational system. He points out that attempts are made to carry out
the three different functions connected with these three types of assessment from a
common set of pupil attainment data and that there are a number of problems with this
approach. In particular, the league tables which are drawn up ostensibly to compare the
performance of schools have very limited validity for six reasons.
• The prior attainment of pupils is not taken into account, and this is a major factor in pupil attainment at a later stage.
• Schools are differentially effective in different subjects and with pupils of different ability, which is not reflected in a single figure.
• The statistical uncertainty of the data is large, making it very difficult to distinguish between the majority of schools in the table.
• Schools change over time; the attainment data used reflects only one cohort and is essentially historical.
• Student mobility between schools is not reflected in the tables.
• Social factors, the sex of students, ethnic origin and social background are not taken into account; these factors are out of the school’s control.
Measures to overcome these problems include multilevel analysis, which reflects the
hierarchical nature of the data, and longitudinal studies, which involve measures of
individual attainment at different times. Such measurements, Goldstein claims, allow
pupil progress to be ascertained and therefore provide a fairer comparison. He points
out, however, that statistical uncertainty is still too large to allow fine discrimination
between institutions. The conclusion Goldstein reaches is that longitudinal, multilevel
measures, possibly the most sophisticated statistical measures of pupil progress
available, are useful within schools as one tool for measuring effectiveness. Such
techniques, however, do not support the rank ordering of schools as presented in league
tables. There is a mismatch between what policymakers expect of summative tests and
what they can actually deliver.
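Goldstein’s point about uncertainty can be illustrated with a small simulation. The numbers below (the spread of true school effects and the measurement noise) are invented for illustration only, not drawn from real attainment data; it is the pattern, not the particular values, that matters:

```python
import random

def simulate_league_table(n_schools=30, seed=1):
    """Each school has a small true effect but a noisily measured score.
    Returns (measured_score, ci_halfwidth) for each school."""
    rng = random.Random(seed)
    schools = []
    for _ in range(n_schools):
        true_effect = rng.gauss(0.0, 0.1)   # genuine differences are small
        std_error = 0.15                    # sampling noise is comparatively large
        measured = true_effect + rng.gauss(0.0, std_error)
        schools.append((measured, 1.96 * std_error))
    return schools

table = simulate_league_table()
# Schools whose 95% confidence interval includes the overall mean (zero)
# cannot statistically be distinguished from the middle of the pack.
indistinguishable = sum(1 for m, hw in table if abs(m) < hw)
```

With these illustrative values the large majority of schools overlap the average, so a rank ordering of the measured scores conveys far more precision than the data support.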
Indeed the problems identified above are not the only concerns about the use of such
techniques for evaluating and comparing school effectiveness. Further questions arise
when the assumptions upon which such programs are based are examined. I suggest that
two related assumptions are problematic, that of linearity and of relative stability. In
order to take the difference in attainment of pupils at two different points in time as a
measure of progress, one must assume that learning is a linear activity (quite apart from
concerns about what tests actually measure). The aggregation of such pupil-level data,
and the drawing of conclusions about school effectiveness from it, then assumes that schools
‘progress’ linearly. The question is posed: do schools, teachers and pupils learn and
develop in a linear way, or do they, at least under certain circumstances, make
significant positive and negative gains? Related to this is the question of stability. Is
pupils’ learning a relatively stable process, and are schools inherently stable
institutions? If the answer to these questions is “No”, then evaluations of effectiveness
based on statistical models, albeit multilevel and longitudinal, may be severely limited.
Peter Tymms [1996] claims that using prior attainment data and examination or test
results allows around half of the variation between pupil results to be explained. This
claim, if true, suggests either that pupil learning is not always a linear process or that
all the relevant factors have not yet been identified. The search for important factors
associated with high rates of pupil progress has occupied a number of researchers within
the school effectiveness tradition over the past twenty years. Most of this work is
essentially reductionist in that it attempts to identify factors using statistical techniques of
averaging, correlation and regression. The process of identifying individual
characteristics in this way rests on the assumption that the processes of change within
schools are linear and that schools are relatively stable institutions. I will turn now to a
discussion of attempts to identify factors in school effectiveness.
Reductionism and key factors in effectiveness
Sammons, Hillman and Mortimore [1995], in ‘The Key Characteristics of Effective
Schools’, attempt to identify the ‘correlates of school effectiveness’. This report concludes
a wide-ranging review of school effectiveness literature designed to distil out the ‘key
determinants’ of school effectiveness in secondary and primary schools. The authors
themselves point to the tentative nature of these key characteristics. They point out that
correlation does not establish causality and that transferability of results from one set of
schools to another is problematic. These key determining features are not to be seen as a
blueprint for success in the educational field but rather as areas to be considered by
schools in the process of self-evaluation. Multilevel modelling allows measures of school
effectiveness to be based on the academic progress (or at least improvement in test
results) being made by individual pupils in the school. A good case could be made, I
believe, for arguing that school effectiveness research has reached an advanced level of
refinement within this paradigm. There is also a good case, however, for arguing the need
to reflect on what these key characteristics tell us about schools and consequently, how
this information can be used for school improvement. If assumptions about linearity and
stability do not always hold, then the usefulness of a list of key characteristics may be
limited. Assumptions implicit in the approach taken by Sammons et al. can be revealed
by studying a quotation they refer to in the ‘Key Characteristics’ work.
Sammons et al. quote Chubb [1988], who says that school performance is unlikely to be
improved by any set of measures that:
Fails to recognise that schools are institutions, complex
organisations composed of interdependent parts, governed by
well established rules and norms of behaviour, and adapted for
stability [Chubb, 1988].
Chubb acknowledges the possibility of interdependence but fails to note that the rules and
norms may not be those of linear systems, or that such institutions are not always
‘adapted for stability’.
It can be argued that the work reviewed by Sammons et al. is largely reductionist,
assuming that features or characteristics can be distilled out or isolated by factor analysis.
My problem with this is not in the search for regularity or pattern within school data but
that the independence of the characteristics is assumed. Clearly many of the
characteristics influence each other (for example purposeful teaching and monitoring
progress). While there is no suggestion from the authors that schools can or do simply
pick the characteristics that they want from the list of eleven, the question arises about
what it means to express these characteristics in a list. It is just as plausible that all
effective schools have subsets of the eleven characteristics. Having sets of the
characteristics together, however, may be an entirely different matter. An example of the
shortcomings of treating factors as independent within educational research is provided
by Riley [1999, p8], who used factor analysis in this way to identify effective LEAs. In
her statistical work, Riley identified five key features. When taken individually these five
predicted about 35% of the variation between LEAs. Taken in combination, however, they
predicted well over half. To what extent can these factors then be thought of as
separate? There may only be a statistical sense in which the five factors can be thought of
as independent.
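Riley’s result, where factors individually explain modest proportions of variation whose contributions do not simply add up, is exactly what correlated factors produce. A synthetic sketch (all data and coefficients here are invented, chosen only to make the overlap visible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two 'factors' that are themselves correlated (r is about 0.6), as, say,
# purposeful teaching and monitoring of progress plausibly are.
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)
y = x1 + x2 + rng.standard_normal(n)      # outcome driven by both factors

def r_squared(predictors, y):
    """Proportion of variance in y explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_individual = [r_squared([x1], y), r_squared([x2], y)]
r2_joint = r_squared([x1, x2], y)
# The individual figures sum to more than the joint figure: the factors
# share explanatory power and cannot be treated as independent causes.
```

The sum of the individual R² values exceeds the joint R² precisely because the factors overlap; treating them as a list of separate contributions double-counts the shared variation.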
Linking school effectiveness to a measure of the difference in attainment of pupils at two
points in time assumes a linear view of learning and of cognitive development. This
assumption is inherent in a large number of the school effectiveness studies reviewed by
Sammons et al. [1995]. Subsequent work by Sammons, Thomas and Mortimore [1997]
points out that departments within secondary schools are differentially effective and that
the effectiveness of such departments varies over time. In fact, Sammons et al. [1995] are
at pains to point out that ‘failing schools’ are not simply the antithesis of effective schools
but may have quite different dynamics. This contradicts the assumption of linearity
contained in much of the work they review.
The assumptions inherent in factor analysis are well summed up by B. Richmond, the
main author of the dynamic modelling software STELLA. Richmond et al. [1987]
suggest that reductionist approaches to explaining phenomena (either physical or social)
rest on three contestable assumptions.
Assumption 1: that an effect may be explained by a list of causes and that these causes
may be prioritised according to magnitude of effect. The causality described is one-way.
Assumption 2: that causes are external to the particular phenomenon or the system under
scrutiny.
Assumption 3: that the causes are relatively independent of each other.
The alternative view, built into STELLA as assumptions, is that external forces are more
likely to act as a catalyst to change within a system and that cause and effect usually
operate in a feedback loop. Causality is circular: the features of the system are
interlinked and vary in magnitude, and the system fluctuates dynamically. An important
consequence of this dynamic view is that a major part of the explanation for the
behaviour of a system lies within the system itself. Outside forces precipitate events and
modes of behaviour which are latent within the system.
Richmond et al. [1987] discuss the implications of assuming that causes operate in a loop.
They argue that there are two kinds of feedback in operation, negative and positive.
Negative feedback seeks equilibrium and is a common occurrence when a system
experiences small ‘shocks’. Positive feedback is connected to periods of larger, more
fundamental change and growth. The two mechanisms complement each other, the one
maintaining stability and the other adaptability and development. Richmond et al. argue
for the necessity of such mechanisms on the grounds that they increase the viability of a
system. Systems that do not operate in this way, they maintain, either stagnate or blow
up. The authors note that the notion of ‘goal-seeking behaviour’ can be applied to all
systems exhibiting negative feedback, positive feedback or both. The ‘goals’ of inanimate
systems may be the constraints of natural laws or forces. In the case of animate systems
there is the further possibility that movement towards a state of equilibrium may be
deferred. This is a very important point which needs further discussion when
contemplating the use of dynamic modelling in social systems.
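The interplay Richmond et al. describe, positive feedback driving growth and negative feedback seeking equilibrium, can be sketched with a simple iterative model. The logistic growth rule below is my own illustrative choice (it is not taken from STELLA), with arbitrary parameter values:

```python
def logistic_growth(p0, r, k, steps):
    """Iterate P -> P + r*P*(1 - P/K). The r*P term is positive feedback
    (growth feeding on itself); the (1 - P/K) term is negative feedback,
    pulling the system towards the equilibrium P = K."""
    p = p0
    trajectory = [p]
    for _ in range(steps):
        p = p + r * p * (1 - p / k)
        trajectory.append(p)
    return trajectory

trajectory = logistic_growth(p0=1.0, r=0.5, k=100.0, steps=60)
# Early on, positive feedback dominates and growth accelerates; as P nears K,
# negative feedback damps the change and the system settles at equilibrium.
```

The two mechanisms are present in a single equation: remove the negative feedback term and the system blows up; remove the positive term and it stagnates, just as Richmond et al. suggest.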
Alternatives to the factor model
Most of the studies in the section above employ a statistically orientated approach to the
study of social systems. Not all researchers would agree that this is appropriate. Many
who are uncomfortable with ‘quantitative’ approaches argue that social events can only
be understood in terms of the meanings they hold for the actors themselves. This
essentially precludes generalisation, since each locality has its own context and meaning
is context-specific. It seems to me that a major problem for this approach is that it
contains an inherent contradiction. For any set of events to have meaning to those
outside of the locality (presumably those reading accounts from the outside are expected
to find them meaningful), there must be some overarching commonality. The situation is often
presented as quantitative versus qualitative: either accept a reductionist approach in
which statistical analysis extracts simple law-like regularities, or accept an in-situ
description which remains subjective and local. Attempts to integrate the two, by arguing
perhaps that statistical techniques are appropriate at a macro level and case studies for
micro understanding, do not really escape the criticisms of either. What may be
appropriate is to investigate some further techniques which are starting to have a major
impact in the physical and biological sciences. These are appropriate in areas where
assumptions about linearity cannot be made. There is no guarantee that complexity
theory has direct relevance for studies of education communities. In the sections ahead I
hope to explore the possibility that it does. The first step will be to acknowledge the
epistemological and ontological assumptions of complexity theory and to contrast these
with what have been broadly termed the reductionist and subjectivist positions above.
Realism and Complexity
Byrne [1998] points out that the ontological and epistemological assumptions
underpinning the philosophical position of realism, in particular the realism of Bhaskar,
appear to fit or resonate with a complexity approach. According to Blaikie [1993],
realism assumes that a reality exists apart from observation and scientific theory but that
this reality may not be immediately observable. A typical realist programme will involve the
search for underlying, generative mechanisms. The aim of realist science is not to reduce
events or organisms to their constituent parts, indeed reductionism may not be
appropriate and may not lead to understanding. Human behaviour, according to realists,
is not reducible to biochemical reactions. Society is produced and reproduced by its
members: the social context conditions human actions but is also developed and changed
by them. Motives play an important part in realist explanation, as does the context within
which events take place. The meaning of actions is a social product, but motives are
personal.
Realist science leans more towards explanation than prediction. The collection of data
and subsequent statistical analysis allows the exploration of trends and connections,
which can attain the level of theories. Laws and theories are not patterns of eternal truth
but regularities which prevail in a chosen context. The important next step is to attempt
to explain these regularities, often by recourse to a model of the situation. Realism relies
heavily on the use of models, simulations and analogies since underlying mechanisms are
often not directly observable. The model is not seen as a replica of a real-life situation but
mirrors some important aspects of it. Explanation often consists of parallels drawn
between a series of events and the model, taking account of the motives of the actors
involved. Explanation and understanding are appropriate at different levels, and often it is
inappropriate to look for understanding by breaking events down into constituent parts.
Realism attempts to avoid subjectivism on the one hand and reductionism on the other. It
assumes an underlying determinism, although outcomes are not predetermined. This view
accommodates the idea of human agency within a rational framework. To some extent
Bhaskar’s realism rides roughshod over the divisions between subjectivism and
reductionism by putting faith in models as explanation. Critics of realism point out the
precarious nature of this ontology and rightly ask what justification there is for assuming
underlying mechanisms. Realists defend their position by appealing to pragmatism. The
test for the truth of realist theories depends on whether they ‘work’ in the sense of
providing good explanations.
Realists see social science as non-neutral, entailing value judgements and consisting of
practical interventions in social life. Exploratory statistical techniques are likely to
provide a useful way of identifying trends and patterns which provide a basis for model
construction. Byrne [1998] highlights the use of cluster analysis, for example, as one such
technique. There are clearly many philosophical criticisms which can be levelled at
realism, but it is not within the remit of this paper to explore them. Given that these
assumptions have been made explicit, it will be important to judge the value of
complexity approaches to work within education.
Chaos and Complexity
‘Chaos and complexity’ are terms which encompass a range of interconnected ideas and
observable events. There is not yet a well-defined theoretical framework under the
umbrella of this area of study. This section will attempt to describe the main features
of chaos and complexity in non-mathematical language. The descriptions will involve
physical examples, since later discussion will reflect on their possible usefulness in social
and educational settings. For brevity I will refer to ‘complexity’ rather than ‘chaos and
complexity’.
The study of complexity is essentially the study of open systems which behave in
particular ways. Open systems are those which interact with their surroundings and in
which there is likely to be an interchange of energy. Examples of open systems include a
heating liquid, a magnetic pendulum and the solar system. Closed systems, on the other
hand, are self-contained, such as a single pendulum. The motion of a single pendulum is
well defined, predictable and linear. This is because, after the initial push to set it in
motion, all the forces damp its motion; the system has no way of generating energy
within or absorbing energy from external sources. Complexity is concerned with systems
which are non-linear, that is, in which, instead of damping or negative feedback,
reinforcement can occur. An arresting example of such positive feedback occurs when a
microphone is placed near a speaker in a public address system: the feedback rapidly
escalates into wildly uncontrolled noise.
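The contrast between damping and reinforcement can be made concrete with a toy iteration; the gain values below are arbitrary illustrations:

```python
def iterate_gain(x0, gain, steps):
    """Feed each cycle's output back as the next cycle's input, scaled by gain."""
    x = x0
    for _ in range(steps):
        x = gain * x
    return x

# Damping (negative feedback): |gain| < 1, so any disturbance dies away.
damped = iterate_gain(1.0, 0.9, 50)

# Reinforcement (positive feedback): |gain| > 1, so, as with the microphone
# and speaker, a minute input grows into a runaway signal.
runaway = iterate_gain(1e-6, 1.5, 50)
```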
Complexity theorists are particularly interested in systems which operate at the ‘edge of
chaos’. These are characterised by a fluid structure which is sensitive to changes. Such
edge-of-chaos systems are referred to as ‘complex adaptive systems’, or as exhibiting
‘self-organised criticality’. The words ‘adaptive’ and ‘self-organising’ highlight the fact
that the organising rules which govern the behaviour of these systems are local and often
simple, and that the systems can readily adapt to change. Another way to characterise this
adaptability is to say that information flows readily throughout these systems. A
computer simulation of a flock of birds exhibits an example of complex adaptive
behaviour, as described in Waldrop [1992]. Craig Reynolds called the individuals in his
computerised flock ‘boids’. Each boid was programmed with three simple, local rules:
• each boid matched its velocity to that of the boids around it (as far as possible);
• each boid tended to move towards the perceived centre of the flock;
• each boid kept a minimum distance from other boids and from obstacles.
The resulting behaviour of this flock on screen proved to be remarkably similar to the
real thing: boids turn together and flow round objects much as flocks of real birds do.
There are two further important issues which need to be highlighted here. First, the
behaviour of the flock cannot be predicted from the initial rules; the flock behaviour can
be said to be ‘emergent’. Secondly, it is typical of complexity approaches that computer
simulations are used to demonstrate or explain this emergent behaviour. In the case of
real birds the three rules make good survival sense, particularly if predators are nearby.
Although the connections between the computer model and the real behaviour are
circumstantial, the demonstration, along with a reasonable explanation, would be
regarded as strong evidence that some common mechanism was operating, or at least
that an analogy can be drawn between the mechanisms in each case.
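A minimal sketch of the boid update step, in two dimensions. The neighbourhood radius and rule weights below are my own hand-picked values, not Reynolds’ originals; the point is only that three local rules produce co-ordinated motion:

```python
import math

def step(boids, radius=5.0, min_dist=1.0, dt=0.1):
    """One update of a 2D flock. Each boid is a tuple (x, y, vx, vy)."""
    updated = []
    for i, (x, y, vx, vy) in enumerate(boids):
        ax = ay = 0.0
        near = [b for j, b in enumerate(boids)
                if j != i and math.hypot(b[0] - x, b[1] - y) < radius]
        if near:
            # Rule 1: match velocity with nearby boids.
            ax += 0.5 * (sum(b[2] for b in near) / len(near) - vx)
            ay += 0.5 * (sum(b[3] for b in near) / len(near) - vy)
            # Rule 2: steer towards the perceived centre of the flock.
            ax += 0.1 * (sum(b[0] for b in near) / len(near) - x)
            ay += 0.1 * (sum(b[1] for b in near) / len(near) - y)
            # Rule 3: keep a minimum distance from other boids.
            for bx, by, _, _ in near:
                d = math.hypot(bx - x, by - y)
                if 0 < d < min_dist:
                    push = (min_dist - d) / d
                    ax -= push * (bx - x)
                    ay -= push * (by - y)
        vx, vy = vx + dt * ax, vy + dt * ay
        updated.append((x + dt * vx, y + dt * vy, vx, vy))
    return updated

# Two boids launched in different directions gradually align their motion:
# the flocking is emergent, written nowhere in the rules themselves.
flock = [(0.0, 0.0, 1.0, 0.0), (2.0, 0.0, 0.0, 1.0)]
for _ in range(300):
    flock = step(flock)
```

Nothing in the three rules mentions a flock, yet the boids end up moving as one; this is the sense in which the collective behaviour is emergent rather than programmed.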
A second example of this type, an ants’ nest, is given by Hofstadter [1985]. Individual
ants operate according to simple, local rules, much like the boids. The resultant
behaviour of the ant colony gives the impression of an over-arching ‘intelligence’ which
emerges from the activity of individual ants. The colony can fulfil its needs and respond
to emergencies. It is complex, since individual ant movements cannot be predicted;
adaptive, reacting to the wider environment; and relatively robust, given that it will
persevere even under extreme conditions.
Non-linear systems are deterministic in the sense that causes and motives prevail. They
are not, however, determined. The sensitivity to initial conditions, and the fact that the
information at one moment feeds back to influence and change the next state, mean that
the system is not fully predictable. An example of this is the magnetic pendulum. The
motion of this object is not random; however, it cannot be defined mathematically in
advance. The resultant behaviour tends to fall within a pattern. This is referred to as the
‘strange attractor’ of the system: ‘attractor’ because behaviour appears to be bound
within a set of states, and ‘strange’ because the system may jump between these states
after being given the smallest of nudges. Strange attractors become visible when a
‘phase diagram’ is constructed of all the possible states that a system could take. For
complex systems of interest, the actual states that the system takes will form a pattern on
the phase diagram. The rings of Saturn, for example, are made up of asteroids which can
maintain only distinct distances from the planet due to the gravitational attraction from
other parts of the solar system; the rings form a visible strange attractor.
A further example of indeterminate outcomes arising from deterministic laws can be seen
in some mathematical equations (an example is given in the appendix). Such an equation
involves feedback or ‘iteration’: the next value in the sequence depends on a calculation
involving the previous term. Sometimes the equation will behave in quite a predictable
way whatever the starting values. Experimenting with different values for the parameter
p, however, shows that at other times the equation will act erratically. Changing the
parameter in this way can be likened to increasing the heat underneath a shallow pan of
oily liquid. When this is done the liquid at first conducts the heat without moving; then it
begins to move with a rolling motion. Increase the heat still more and the erratic
movement of liquid gives way to a layer of hexagonal convection cells, with hot liquid
rising up the sides and cold falling down the middle of the cells. Further heating leads to
more erratic behaviour. Some physical systems, when driven by increasing heat, water
flow or whatever, exhibit bifurcation: the system develops consistently for a while and
then suddenly splits in two to take either a higher or lower value. Each of these arms
then proceeds regularly until bifurcation occurs again in each of the arms. The time
between successive bifurcations becomes shorter by a constant factor.
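The appendix equation is not reproduced here, but the much-studied logistic map, which iterates x into p·x·(1 − x), behaves in exactly the way described; the parameter values below are chosen for illustration:

```python
def iterate_map(p, x0, steps):
    """Iterate x -> p * x * (1 - x): each output becomes the next input."""
    xs = [x0]
    for _ in range(steps):
        xs.append(p * xs[-1] * (1 - xs[-1]))
    return xs

# Low parameter: the sequence settles to the same fixed point from any start.
settled = iterate_map(2.5, 0.2, 200)

# High parameter: two starting values differing only in the seventh decimal
# place soon follow entirely different paths - sensitivity to initial conditions.
path_a = iterate_map(3.9, 0.2, 200)
path_b = iterate_map(3.9, 0.2000001, 200)
```

Between these two regimes lies the period-doubling cascade described above, in which the gaps between successive bifurcations shrink by a constant factor.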
Underlying complexity theory is the notion that systems are hierarchical and that higher
levels may be more than the sum of their lower-level constituents. In the non-linear
systems which interest complexity theorists, the parts interact in a way which cannot be
reversed. Light waves, for example, are linear: when waves of different amplitude or
frequency merge, a complicated additive product is formed, but the original waves can be
separated again. In a non-linear system no such separation is possible, since the parts
change each other and create a new state. In non-linear systems the ‘arrow of time’ runs
one way. The implications of this are numerous. First, a reductionist approach will often
not be appropriate, and explanation of a lower-order phenomenon may be by reference to
the higher level. Furthermore, the higher-level activity and organisation may ‘emerge’
from the lower constituents and may not be predictable by looking at the constituents.
Contrary to reductionism, therefore, a complexity approach may involve identifying
patterns at a macro level which change and develop within defined limits. The problem
for complexity theorists is to establish a firm footing with their material since, contrary to
reductionism, there are no building blocks identified at a lower level which are anchored
in experience.
The ideas and language of complexity have been used in a range of contexts from
weather systems, earthquakes, population studies to the behaviour of the stock market. A
system becomes interesting in terms of complexity theory when it is far from its
equilibrium point, in the region between rigidity and randomness, for example, at phase
transition points, such as the melting point of a liquid. Classical economics works on the
assumption of diminishing returns. As such it is similar to the simple pendulum. There
are times, however, when positive feedback applies in economics. Brian Arthur [1990]
uses the example of economic ‘lock-in’: when a particular technological solution gains a
slight advantage, this can rapidly lead to an overwhelming lead, since purchasers do not
want to buy a product which will not be supported in the future. The QWERTY keyboard
is often given as an example.
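Arthur’s lock-in mechanism can be sketched as a Pólya-urn-style simulation: each new purchaser picks a technology with probability equal to its current market share, so early chance advantages feed on themselves. All the numbers here are illustrative:

```python
import random

def final_share(purchasers=2000, seed=0):
    """Two rival technologies start with one adopter each. Each purchaser
    adopts A with probability equal to A's current market share."""
    rng = random.Random(seed)
    a, b = 1, 1
    for _ in range(purchasers):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

# Identical markets, different chance early adoptions: the final shares are
# wildly different from run to run, and each run locks in early.
shares = [final_share(seed=s) for s in range(100)]
```

The deterministic rule is the same in every run; only the early, essentially accidental adoptions differ, yet they decide which technology dominates, which is the sensitivity to initial conditions described earlier.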
The booms and busts of world economies and the occurrence of earthquakes have both
attracted much interest from complexity theorists. Like earthquakes, incidents in the
economy can be mapped over time, and interestingly a similar pattern appears to emerge.
If the size of earthquakes and of economic changes is quantified, then in both cases an
event ten times bigger happens ten times less often. This is not to say that the actual
timing of an event can be predicted. In fact there is nothing to stop large catastrophes
happening one after the other, and it may only take a small event to initiate a large
catastrophe. Over time, however, the frequency of large and small incidents follows this
‘power law’ in a variety of contexts. Not all systems are non-linear, and therefore not all
are amenable to a complexity approach. Within our solar system, for example, the sun
contains more than 99% of the mass. The movement of the planets around the sun is not
chaotic for this very reason: the gravitational pull of the sun overwhelms any
interplanetary attraction, damping down chaotic motion.
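The ‘ten times bigger, ten times rarer’ pattern can be checked against synthetic data; the chosen distribution, P(size > s) = 1/s, is an illustrative assumption rather than a fit to any real earthquake or market record:

```python
import random

def sample_event_sizes(n, seed=0):
    """Draw event sizes obeying P(size > s) = 1/s, by inverse transform
    sampling: size = 1/u for u uniform on (0, 1]."""
    rng = random.Random(seed)
    return [1.0 / (1.0 - rng.random()) for _ in range(n)]

sizes = sample_event_sizes(100_000)
# Count events by decade of size: each decade should hold roughly ten
# times fewer events than the decade below it.
medium = sum(1 for s in sizes if 10 <= s < 100)
large = sum(1 for s in sizes if 100 <= s < 1000)
ratio = medium / large
```

The frequency ratio between adjacent decades hovers around ten, even though nothing constrains when, or in what order, the large events occur.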
Complex systems at the edge of chaos are inherently evolutionary: order emerges out of
chaos, and stability is punctuated by rapid change. The ideas of complexity theory appear
to be well established in the physical sciences. The question arises whether these ideas
have any relevance in the study of education communities. One further
example might help to convince the reader that the question should be pursued. The game
of chess involves around ten simple rules and is confined to the physical space of the
chess board. The system of chess is complex and adaptive and is extensively studied.
Statistical approaches, however, do not seem appropriate: it makes little sense to think of
determining all possible outcomes, since these are huge in number and opponents react to
each other’s moves rather than following a ‘rational’ course. Game strategies and
macro-rules have emerged over hundreds of years, and still the game has potential for
innovation and creativity. There is no attempt made to reduce strategies at one level to
the game rules at another. Could the study of education communities be treated in a
similar way? To this question I will now turn.
Complexity – A valid paradigm for education?
Byrne [1998] points out that many ideas from complexity theory ‘resonate’ with ideas
and experiences from the social sciences. What follows are some of the areas of work and
the findings from educational research which resonate for me in this way. Each of these
areas would merit a full section of its own to fully explore the possibilities. They are
presented here in the form of suggestions for future development.
Formative Assessment, Feedback and Learning
Black and Wiliam [1998] argue convincingly that formative assessment, that is, assessment
in which evidence is used to adapt the teaching materials and methods used, is
crucial to successful learning. They argue this having scrutinised several hundred studies
of pupils’ learning in different contexts. The theme of learning and feedback is not only
apparent at individual pupil level. Reference to the need to focus on the diagnosis and the
detail of learning is found throughout education literature; for example in the notion of
the Reflective Practitioner [Schön, 1983], The Intelligent School [MacGilchrist et al.
1997] and in literature concerned with the Learning Society. In fact, if there is one central
image which captures the essence of modern education, it is that of the learning cycle.
Learning clearly fits the definition of an emergent phenomenon as explained by Holland
[1998]. He claims that, in the process of learning:
1) There are underlying mechanisms generating enhanced understanding.
2) The whole is more than the sum of its parts.
3) Persistent patterns emerge.
4) The function of these patterns changes with context.
5) Higher level patterns can be built on lower level ones.
The above points will be amplified with a few examples. Traditional models of learning,
in which the mind was thought to be an empty vessel to be filled with information,
arguably espoused a linear approach. Any form of developmental or constructivist
view of learning implies a dynamic process. Denvir and Brown [1986] developed a
hierarchy of skills which children acquire along the road to becoming fluent at basic
mathematics. When tracking individual pupils through the process they found:
a) that the order of acquisition differed for different pupils,
b) that, in post-test situations, children were sometimes competent in skills they had not
been taught and often not competent in those they had.
There are active brain-processes underlying the pupils’ learning. Lawler [1985]
demonstrated how mathematical knowledge develops within distinct domains and how
significant moments of enhanced understanding are achieved when domains are bridged.
The whole is more than the sum of its parts, and the function of the learning is constrained by context.
DiSessa [1988] argues that intuitive physics often conflicts with the text-book version.
Non-physicists rely on a number of experiential fragments which he calls
phenomenological primitives. These, he argues, require no explanation but are simply
used without question. In order to enter the realm of scientific theory a more systematic
model building is required. This layering of patterns of understanding is exemplified in
Seymour Papert’s [1980] computer language, LOGO, designed for exploring and
developing mathematics.
School-level Examples
Within schools, there is ample evidence that successful teacher development depends on
extended time for reflection, as in the action research model, Schön [1983], and that short-term INSET is relatively ineffective, Askew [1997]. Teachers’ learning may also be a
dynamic process. Fullan [1991] identifies four main factors in the implementation of
lasting change in educational systems. These are:
Active initiation and participation.
Pressure and support.
Changes in behaviour and belief (where changes in behaviour may predate those in belief).
The overriding problem of ownership.
A dynamic model of change is implied by the above factors. Louis and Miles [1991]
found evidence that having ‘effective coping strategies’ was the most important issue in
the success of change programs within urban high schools. This was closely linked to
access to immediate information and feedback. The quality of planning was not related to
the success of the programs. Scheerens and Creemers [1989] conclude that retro-active
planning matters more than pro-active planning; that is, schools need to plan generally
but remain flexible enough to plan in detail for immediate, changing circumstances.
Learning as Central to the Understanding of Education Communities
Within the school improvement literature there are examples of dynamic processes in
action, such as John MacBeath’s work with whole school communities. This work
strongly implies a complexity model with its focus on community member interaction
and emergent solutions. Joyce [1991] captured the notion of a holistic and anti-reductionist approach to school improvement. Much of the theoretical work, however,
returns to factor analysis; for example, Creemers [1994] provides a model containing a
range of several dozen correlates with school effectiveness linked by arrows showing
interconnection and influence. He then calls for large-scale studies to give greater
empirical support for these links. Some work in education leans towards a complexity
approach. For example, Byrne and Rogers [1996] compare social and educational
divisions using cluster analysis techniques. Tymms [1996] uses computer simulations to
capture the ‘ebb and flow’ of performance data. The new statistical techniques associated
with exploratory data analysis [EDA] are compatible with an approach which is
interested in dynamics and in detail rather than in averages and long-term trends. There is a
need, I believe, for exploration of the application of complexity techniques to research in
education.
Criticisms of Complexity
Sardar and Ravetz [1994] entitled an edition of Futures magazine, “Complexity: Fad or
Future?” Some writers express concern that ‘complexity and chaos’ refers
to a collection of ideas backed up by a few interesting-looking computer graphics but
with no real independent basis. There is a real danger that the lure of computer graphics
will convince some researchers to find chaos where it is inappropriate, and introduce
notions of complexity where a more traditional explanation might be appropriate. There
is much work to be done to establish the use of these ideas in the social sciences. Many
arguments for complexity rest on analogy and simulation, and the rigour of such approaches is
debatable. Until a framework for the validity and reliability of work in complexity is
mapped out (if indeed this is technically possible), conclusions will have to be
tempered with extreme caution. How, for example,
does the boids computer program, or the study of ants’ nests, relate to or assist our
understanding of groups of humans?
As mentioned earlier, there are debates about the ontological and epistemological
assumptions underlying complexity theory; the notion of underlying mechanisms is
of particular concern. Some important questions are: when do linear and when do non-linear
assumptions prevail? Are these assumptions the same in physical and in social science or
are we simply being sucked into a set of mathematical diversions? What more do we
understand about some phenomena from a complexity standpoint? In the final section I
will discuss a few of the implications as I see them for adopting a complexity approach.
Some Implications for Adopting a Complexity approach to the Study of Education
The most interesting area of work in education for complexity, as I see it, is around the idea
of learning and feedback. This is almost certainly a complex process. Learning taken at
different levels will end up covering large areas of interest within educational research.
Various models of learning are in existence, but the question that complexity could assist
with is how levels of learning (pupil, teacher and organisation) interlink. A complexity
approach would attempt to identify behaviours which emerge from the exercise of local
practices around teaching and learning. In terms of the whole school, do appropriate
management structures emerge out of such local practices?
Following on from the centrality of learning and feedback we might consider what types
of planning are effective. If the system is to be regarded as adaptive and flexible and
many of the solutions emergent then this will be reflected in the planning; perhaps longterm vision and short-term adaptable practice. How does one control a complex system
given that output can fluctuate wildly when it is far from equilibrium? What does a
strange attractor look like for a school? Byrne and Rogers [1996] attempt to answer this
in part with their school typology. If the mechanisms at work are recognised and if the
possible outcomes have been identified then strategies for moving towards more
desirable outcomes and away from less desirable ones may be possible. A chess grandmaster,
when asked about the secret of good play, replied that it was to avoid making major
errors. Perhaps complexity shows the power of multiple interactions and that as human
agents we cannot be responsible for everything that happens. A particular school comes
to mind where, during an OFSTED inspection, a single negative remark was picked up by
the local paper. The concern generated in the local community was such that the intake of
pupils fell the next year leading to a staff redundancy, demoralisation and financial
problems. Such violent swings are not uncommon in education. Perhaps understanding
this would help people to cope.
A major problem for school improvement researchers is to explain how schools under
special measures can most effectively move forward. An understanding of mechanisms
using complexity approaches may suggest less rather than more planning and more focus
on learning and teaching. There are a number of interesting questions waiting to be
answered, such as: What makes complex systems effective? What is the optimal
performance of a complex system given variation across time and across its parts? How
can the effectiveness of complex systems be evaluated? Can dynamic modelling software
such as STELLA provide ways of simulating the operation of schools and perhaps allow
us to avoid making decisions leading to unwanted outcomes? I do not envisage that a
complexity approach will conflict with the work already established in the fields of
school effectiveness and school improvement. Complexity may provide a framework for
connecting findings from these fields, for example linking our present understanding of
highly effective and ‘failing’ schools. To return to the analogy drawn at the beginning of
this paper, the Newtonian Laws of Motion can be compared with reductionist models of
school effectiveness. Both work satisfactorily in some situations but not in others.
Finally, and this is in an increasingly speculative vein, is our school system in a lock-in
state like that of the QWERTY keyboard discussed earlier? Are we locked into a way of
teaching and organising education which prevents evolutionary solutions from
emerging? Would an understanding of this help release the constraints, as it were? Our
solar system can be considered effectively linear because the sun contains most of its mass
and therefore dampens any chaotic tendencies. In this case the ensuing stability is a
necessary condition for the development of life on earth. Most other organic systems,
however, survive because of their ability to adapt and change. I suggest that there are
interesting possibilities awaiting the application of complexity theories to the study of
education.
Rod Cunningham – September 2001
Appendix
The Logistic Equation
This is an iterative equation which describes how populations change.
Xn is the size of the population at time n
Xn+1 is the size at time n+1
p is a parameter which can be varied
The equation is:
Xn+1 = pXn (1 – Xn)
Start with p = 1 and Xn = .2
Then Xn+1 = .2 (1 - .2)
= .16
In the next step, .16 feeds into the equation as Xn to compute the next value of Xn+1,
and so on. The aim is to keep computing results until a stable solution for X is reached,
that is, until the value of X is maintained through repeated calculations.
If the starting value is kept constant at .2 and the parameter p is varied between 1.5 and 3.9,
an interesting set of results is produced.
For p = 1.5, X converges to a single value.
For p = 3.0, two distinct values of X arise.
For p = 3.5, four values of X arise.
The next step will be to eight values, and so on. The interval of p before each successive
doubling can be seen to shrink; it does so by a roughly constant factor called the
Feigenbaum number (approximately 4.669).
For p = 3.58 a scatter of results is obtained.
The above is analogous to the move from laminar to turbulent flow as the speed of liquid flow
increases. The scatter of values which X takes is not random and has some pattern,
although this is not regular. Very small changes in parameter p have dramatic
consequences for the output of the equation.
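The iteration and period-doubling described above can be reproduced directly. The short sketch below (the function name is my own) iterates the logistic equation from the same starting value of .2, discards a long transient, and counts the distinct values the population settles into for each value of p.

```python
def attractor(p, x0=0.2, transient=1000, samples=64, ndigits=4):
    """Iterate Xn+1 = p * Xn * (1 - Xn), discard the transient,
    then collect the distinct (rounded) values the orbit visits."""
    x = x0
    for _ in range(transient):
        x = p * x * (1 - x)
    values = set()
    for _ in range(samples):
        x = p * x * (1 - x)
        values.add(round(x, ndigits))
    return sorted(values)

for p in (1.5, 3.5, 3.58):
    vals = attractor(p)
    print(f"p = {p}: {len(vals)} value(s)")

# p = 1.5 settles on the single value 1 - 1/p = 1/3;
# p = 3.5 cycles among four values, as described above;
# p = 3.58 scatters over many values: the onset of chaos.
```

Varying p in small steps between 3.5 and 3.58 shows the doublings arriving faster and faster, the behaviour governed by the Feigenbaum number.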
References
Arthur, B. (1990). “Positive Feedbacks in the Economy.” Scientific American, February:
80-85.
Askew, M., et al. (1997). “The Contribution of Professional Development to
Effectiveness in the Teaching of Numeracy.” Teacher Development 1(3): 335 - 355.
Black, P. and D. Wiliam (1998). “Assessment and Classroom Learning.” Assessment in
Education 5(1): 7-73.
Blaikie, N. (1993). Approaches to Social Enquiry. Cambridge, Polity Press.
Byrne, D. (1998). Complexity Theory and the Social Sciences. London, Routledge.
Byrne, D. and T. Rogers (1996). “Divided Spaces - Divided schools: an Exploration of
the Spatial Relations of Social Division.” Sociological Research Online 1(2).
Chubb, J. E. (1988). “Why the Current Wave of School Reform Will Fail.” Public Interest
90: 28-49.
Creemers, B. (1994). Towards a Theory of School Effectiveness. The Effective
Classroom. B. Creemers.
Denvir, B. and M. Brown (1986). “Understanding of Number Concepts in Low Attaining
7-9 Year Olds.” Educational Studies in Mathematics 17: 15-36.
diSessa, A. (1988). Knowledge in Pieces. Constructivism in the Computer Age. G.
Forman and P. Puffall. London, Laurence Erlbaum Associates.
Fullan, M. (1991). The New Meaning of Educational Change. London, Cassell.
Goldstein, H. (1997). “Methods in School Effectiveness Research.” School Effectiveness
and School Improvement 8: 369-395.
Gray, J., D. Reynolds, et al., Eds. (1996). Merging Traditions: The Future of Research on
School Effectiveness and School Improvement. School Development Series. London,
Cassell.
Hofstadter, D. R. (1985). Metamagical Themas: Questing for the Essence of Mind and
Pattern. Harmondsworth, Penguin Books.
Holland, J. H. (1998). Emergence: From Chaos to Order. Oxford, Oxford University
Press.
Joyce, B. R. (1991). “The Doors to School Improvement.” Educational Leadership 48(8):
59-62.
Lawler, R. (1985). Computer Experience and Cognitive Development. Chichester, Ellis
Horwood.
Louis, K. S. and M. B. Miles (1991). “Managing Reform: Lessons from Urban High
Schools.” School Effectiveness and School Improvement 2(2): 75-96.
MacGilchrist, B., K. Myers, et al. (1997). The Intelligent School. London, Paul Chapman.
Papert, S. (1980). Mindstorms. Brighton, Harvester Press.
Reynolds, D., et al. (2000). An Introduction to School Effectiveness. In C. Teddlie and D.
Reynolds, The International Handbook of School Effectiveness Research. London,
Falmer Press.
Richmond, B., P. Vescuso, et al. (1987). An Academic User's Guide to Stella. Lyme,
New Hampshire, High Performance systems.
Riley, P. K., D. J. Docking, et al. (1999). Caught Between: The Changing Role and
Effectiveness of the Local Education Authority. British Educational Research Association,
University of Sussex.
Sammons, P., J. Hillman, et al. (1995). Key Characteristics of Effective Schools: A
review of effectiveness research. London, Institute of Education.
Sammons, P., S. Thomas, et al. (1997). Forging Links: Effective Schools and Effective
Departments. London, Paul Chapman.
Sardar, Z. and I. Abrams (1999). Introducing Chaos. Cambridge, Icon Books.
Sardar, Z. and J. Ravetz (1994). “Complexity: Fad or Future?” Futures 26(6).
Scheerens, J. and B. Creemers (1989). Towards a More Comprehensive
Conceptualisation of School Effectiveness. School Effectiveness and School
Improvement. B. Creemers, T. Peters and D. Reynolds. Lisse, Swets and Zeitlinger.
Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action.
New York, Basic Books.
Teddlie, C. and D. Reynolds (2000). The International Handbook of School Effectiveness
Research. London, Falmer Press.
Tymms, P. (1996). Theories, Models and Simulations: School Effectiveness at an
Impasse. Merging Traditions. J. Gray, D. Reynolds, C. Fitz-Gibbon and D. Jesson.
London, Cassell.
Waldrop, M. M. (1992). Complexity: The Emerging Science at the Edge of Chaos.
London, Penguin.
Rod Cunningham
School Development Adviser
Torfaen County Borough Council
County Hall
Cwmbran
NP44 2WN
01633 648181
rod.cunningham@torfaen.gov.uk