
A2 Geography Study Guide 2012/2013
GEOG 3: Contemporary Geographical Issues
Chapter 1: Weather and Climate Associated Hazards
1. Introduction: Why are the British obsessed with the weather?
The nature of the climatic region we inhabit plays a large part in determining lifestyle. It affects the activities we can undertake and when we do them, the food we eat, the clothes we wear, how we travel, the sports we play and the ways we choose to pass our leisure time, all of which in part allow us to define specific cultures. The British are renowned for discussing the apparently 'unpredictable' and changeable nature of the weather they witness on a daily basis. So great is our obsession with the weather that it has become a British pastime, one that appears odd to the rest of the world, who inhabit different climate zones. The reason for this obsession is no surprise: no other country in the world has such varied weather conditions, with the same power to rule people's lives, as ours. Therefore the real question the rest of the world should ask is "Why is the British weather so unpredictable and changeable in nature?", not "Why are the British so obsessed with it?" In this part of the module you will find out the answer to this question and (hopefully) be enlightened by the other questions it will allow you to answer!
Public awareness of and interest in the climate has increased dramatically over the last 30 years, as has our ability to understand the atmosphere and predict weather patterns using computer models that allow us to suggest future trends. However, climate is complicated and chaotic in nature, and its link to weather phenomena is still poorly understood, making long-term predictions difficult. Issues such as anthropogenic (man-made) climate change may prove to be one of the chief future threats to the well-being of the planet and of society, so the issue deserves its newsworthy status and relevance to day-to-day life!
Key Terms:
Weather: The state of the atmosphere at a particular point at a specific time. Weather can be described in terms of temperature, precipitation, wind speed, wind direction, cloud type, humidity and visibility.
Climate: The mean atmospheric conditions of an area measured over a substantial period of time. Different parts of the world have recognisable climate characteristics with distinctive seasonal patterns.

"Climate change should be seen as the greatest challenge to face man and treated as a much bigger priority in the United Kingdom" – Prince Charles

2. Major Climate Types
Climate is defined as an area's long-term weather pattern. The link between climate and weather conditions is complicated, but good examples of long-term weather phenomena observed in different climates would be precipitation, temperature, hours of sunshine, wind speed and so on. There are numerous types:
Polar = Cold + Dry
Continental = Cold + Humid
Temperate = Mild + Humid
Dry or Arid = Hot Deserts
Tropical = Hot + Humid
Mountainous (Semi-Arid or Alpine) = Cold winters, mild summers
3. Major Climatic Controls

Key Terms:
Atmosphere: The mixture of transparent gases that surrounds the earth and is held in place by gravity. It contains mainly nitrogen (78%) and oxygen (21%) as well as minor gases such as carbon dioxide, methane, water vapour, argon and other traces.
Humidity: The amount of water vapour in the atmosphere, which varies with latitude: virtually zero at the poles but over 5% at the tropics. It depends mainly on temperature; the warmer the air, the more water vapour it can hold. Absolute humidity is the total amount of water vapour in the atmosphere, measured in g/m3. Relative humidity is the actual vapour content compared to the amount the air could hold at a given temperature and pressure, expressed as a percentage.
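The relative humidity definition above is just a ratio, and can be sketched as a quick calculation (a minimal illustration in Python; the figures are made up for the example):

```python
def relative_humidity(actual_vapour_g_m3, capacity_g_m3):
    """Relative humidity: actual vapour content as a percentage of
    the maximum the air could hold at that temperature."""
    return actual_vapour_g_m3 / capacity_g_m3 * 100.0

# Illustrative figures only: air holding 8 g/m3 of water vapour
# when it could hold 16 g/m3 at its current temperature.
print(relative_humidity(8.0, 16.0))  # 50.0 (% relative humidity)
```

At 100% relative humidity the air is saturated, which is when condensation and cloud formation begin.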
(a) The Structure of the Atmosphere
The atmosphere is a pocket of odourless and colourless gas held to the earth by gravitational attraction; the generally accepted limit of the atmosphere by convention is 1,000 km. Most of the atmosphere, and therefore the climate it controls, lies within 16 km of the Earth's surface at the equator and 8 km at the poles. Roughly half of the atmosphere's mass lies within just 6 km of sea level and 99% within 40 km. Atmospheric pressure decreases rapidly with altitude (you will get to observe this phenomenon when you see how much a bag of crisps expands at the top of Mt. Etna during your second year trip); remember that mountaineers find it very hard to get a hot cup of tea because water boils at about 72°C on Everest. Weather balloons have been used to work this out for pressure, but for temperature recent satellite imaging shows a more complex change with altitude. The change of temperature with altitude is used to divide the atmosphere into four layers.
The Troposphere: The bottom-most of these layers is the troposphere. Temperatures here decrease by about 6.4°C with every 1,000 m increase in altitude (the environmental lapse rate) from an average surface temperature of about 16°C. This is because the atmosphere is not warmed directly by the sun: incoming solar radiation first heats the ground, and this heat is then conducted and radiated to the overlying atmosphere, so the further you are from the ground surface, the less the warming and the colder it will be. The troposphere is an unstable layer containing most of the world's water vapour, and therefore clouds, as well as dust and other particulate pollution. Wind speeds usually increase with height and, as mentioned earlier, pressure decreases with height because there is less overlying air pressing down. At about 12 km from the Earth's surface the tropopause is reached; this is the boundary that represents the limit of the Earth's climate and weather systems. In the tropopause the temperature remains the same (about −60°C) despite any increase in height (this is how you know you have reached it); this is termed an isothermal layer (simply meaning equal temperature). Jet aircraft cruise at approximately 9,000 m, just before the tropopause is encountered.
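The environmental lapse rate quoted above gives a simple rule of thumb for temperature in the troposphere, sketched here in Python (surface temperature and lapse rate taken from the text; real temperature profiles vary from day to day):

```python
def troposphere_temperature(altitude_m, surface_temp_c=16.0,
                            lapse_rate_c_per_km=6.4):
    """Approximate air temperature using the environmental lapse rate:
    temperature falls ~6.4 degC per 1,000 m of altitude gained."""
    return surface_temp_c - lapse_rate_c_per_km * (altitude_m / 1000.0)

# At a jet's cruising altitude of ~9,000 m:
print(troposphere_temperature(9000))  # about -41.6 degC
```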
The Stratosphere: The next layer is the stratosphere, characterised by a steady increase in temperature with height. This is owed to its high concentration of ozone (O3), which absorbs incoming UV solar radiation and so warms the layer. Winds are light in the lower stratosphere but increase with height, pressure continues to fall and the air is much drier! The stratosphere, like the layers above it, acts as a protective shield against meteorites, which 'burn up' before reaching its lower reaches. At approximately 48 km from the surface another isothermal layer (usually c. −25°C), termed the stratopause, is reached.
The Mesosphere: The next layer is the 'middle layer' or mesosphere, characterised by rapidly falling temperatures once the upper limit of the stratopause is left behind. Temperatures fall because there is no water vapour, which has a warming effect, and no cloud, dust or ozone to absorb incoming radiation and provide an insulating effect. The mesosphere witnesses the lowest temperatures, down to about −90°C, and unimaginably strong winds of up to 3,000 km/h! There is another isothermal layer at 80 km, the top of the mesosphere, known as the mesopause, where there is no change in temperature with altitude and temperatures can also be as cold as −90°C.
3
Thermosphere: The uppermost layer is known as the thermosphere because temperatures here rise very rapidly with height, perhaps reaching 1,500°C. This is due to the increase in the proportion of atmospheric oxygen (O2), which, like ozone (O3), absorbs incoming UV radiation.
The Vertical Structure of the Atmosphere
(b) The composition of the Atmosphere
The atmosphere is composed mainly of a mixture of gases, but it also contains some liquids and even some solids held nearer to the surface by gravity. The composition of the atmosphere is relatively constant in its lower reaches (the bottom 10–15 km or so), though it can vary spatially and over time, causing fluctuations in temperature, pressure and humidity and thus affecting weather and climate! The composition of the atmosphere is the subject of hot debate, especially as scientists are currently trying to establish the extent to which man's release of CO2 is linked to the recent global warming observed in the geological record (see climate change notes), as well as the issue of the hole in the ozone layer (not to be confused with global warming) caused by the release of CFCs.
Table showing data on composition of the atmosphere

Permanent Gases:
- Nitrogen – 78.09% by volume. Importance for weather and climate: no effect, mainly passive. Other planetary functions/source: plant growth.
- Oxygen – 20.95%. Other planetary functions/source: needed in respiration; produced by photosynthesis; decreased by deforestation.

Variable Gases:
- Water vapour – 0.2–4.0%. Importance: source of cloud formation; reflects as well as absorbs solar radiation, keeping temperatures constant; provides the majority of Earth's natural 'greenhouse effect'. Functions/source: essential for life; can be stored as ice/snow.
- Carbon dioxide – 0.03%. Importance: absorbs long-wavelength radiation and therefore increases global temperatures, i.e. adds to the natural greenhouse effect; human activity releases carbon dioxide (anthropogenic CO2), which is a major cause of climate change. Functions/source: used by plants during photosynthesis; a greenhouse gas that causes warming; increased by burning fossil fuels and deforestation.
- Ozone – 0.00006%. Importance: absorbs incoming UV radiation. Functions/source: shields animals and plants from the sun's deadly rays; destroyed by chlorofluorocarbons (CFCs).

Inert Gases:
- Argon – 0.93%; helium, neon, krypton – trace. Importance: none.

Non-Gases:
- Dust – trace. Importance: absorbs/reflects atmospheric radiation, e.g. volcanic eruptions can cause cooling if dust is released into the upper atmosphere; dust is important for cloud formation as it forms condensation nuclei. Source: volcanic and meteorite dust as well as soil erosion.

Pollutant Gases:
- Sulphur dioxide, nitrogen oxide, methane – trace. Importance: affects levels of incoming solar radiation and is a cause of acid rain. Source: industry, power stations and car exhausts.
(c) Atmospheric heat budget (Energy in the Atmosphere)
The Earth's primary heat source is the sun, from which it receives energy as incoming short-wavelength radiation (insolation). It is this energy that controls our planet's climate and associated weather systems, which in turn control the amount of energy that primary producers (plants) convert to stored energy during photosynthesis. There are other sources of heat energy, however: some comes from deep within the planet's interior and is known as geothermal heat. This heat originated in the Earth's accretion 4.5 billion years ago, with some of it owing its origin to the radioactive decay of unstable isotopes in the core. Other authors class the heat generated by urban settlements as another source (but really this is just energy that ultimately came from the sun and was stored chemically as fossil fuels, or came from geothermal sources).
Factors Controlling Solar Heating
There are four astronomical factors that control how much heating the Earth receives from solar insolation. In summarising these factors we assume that no atmosphere surrounds the Earth, as the atmosphere can absorb, reflect or scatter incoming solar radiation as it passes through, depending on the proportions of its constituents (see the table of composition above for the effect of each component).
(i) The solar constant: the amount of solar energy (insolation) received per unit area, per unit time, on a surface at right angles to the sun's beam at the edge of the Earth's atmosphere. Despite its name ('constant') it does vary slightly due to sunspot activity on the sun's surface; this is unlikely to affect daily or yearly weather, but it may influence long-term global climate change.
(ii) The distance from the sun: The Earth's orbit around the sun is not a perfect circle, as you may have drawn in science classes at school, but in fact more of an egg shape; this is what a geographer describes as the eccentric orbit, or eccentricity, of the orbit. This oval-shaped orbit is enough to cause a 6% variation in the solar constant, as the sun appears to be closer or further away depending on where you are in the orbital cycle.
(iii) The altitude of the sun in the sky: The equator receives more energy than the poles as the sun's energy strikes it head on, at times at exactly 90° (i.e. during the spring/vernal and autumnal equinoxes – see later notes). By comparison, at higher latitudes, for instance 60°N and 60°S, the sun hits the surface of the earth at a lesser (more oblique) angle, so there is more atmosphere to travel through and a greater surface area to heat up, and the amount of heating in these regions is less.
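The effect of the sun's angle can be put into numbers: the energy received per unit of ground area scales with the sine of the sun's elevation (a standard geometric result, not a figure from the text):

```python
import math

def relative_insolation(solar_elevation_deg):
    """Energy per unit ground area relative to an overhead sun,
    ignoring atmospheric absorption: the same beam is spread over
    a larger area as the elevation angle falls."""
    return math.sin(math.radians(solar_elevation_deg))

print(relative_insolation(90))  # 1.0  : sun directly overhead
print(relative_insolation(30))  # ~0.5 : beam spread over roughly twice the area
```

An elevation of 30° is roughly what a point at 60° latitude sees at noon at the equinox, which is why it receives about half the heating of the equator before atmospheric losses are even counted.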
(iv) The length of night and day: The Earth is tilted on its axis at 23.5°, and this controls the length of day and night (I hope you would agree). Regions north of 66.5°N or south of 66.5°S receive no insolation at all at certain times of the year.
Not all of the incoming solar radiation reaches the Earth's surface: only approximately half (45%) does. So what happens to the rest? Well, a large amount of incoming insolation is absorbed by ozone (O3), water vapour, carbon dioxide, ice particles and dust particles, all of which reduce the amount reaching the surface. Thick cloud cover also plays a role, as 10% may be reflected back to space; similarly, reflection back to space occurs at the surface itself, for example on snowfields, where 20% of the incoming radiation can be reflected. The ratio between incoming radiation and the amount reflected back to space, expressed as a percentage, is known as the Earth's albedo (not to be mistaken for libido).
Both deforestation and overgrazing increase the Earth's albedo, causing less cloud formation and precipitation, so desertification can result! See below for some common factors controlling albedo.
Scattering of solar radiation occurs when gas molecules divert its path and send it off in all directions; some of this will still reach the surface, and this is called diffuse radiation. Scattering is strongest at the blue end of the EM spectrum, and this is what causes the sky to appear blue.
After absorption, reflection and scattering, about 24% of incoming radiation reaches the surface directly and a further 21% reaches it by diffuse means, making up the 45% that arrives at the surface. Once in contact with the ground, incoming radiation heats the Earth's surface, and the ground then radiates heat back towards space, where 94% of this energy is absorbed (only 6% is lost) by the Earth's greenhouse gases (water vapour, methane and CO2). This outgoing terrestrial radiation is long-wavelength, or infra-red, radiation.
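The percentages in the paragraphs above add up as a simple budget, which can be checked directly (figures as quoted in the text):

```python
# Of 100 units of incoming solar radiation:
direct = 24    # reaches the surface directly
diffuse = 21   # reaches the surface after scattering
surface_total = direct + diffuse
print(surface_total)  # 45 -> the "approximately half (45%)" quoted above

# Of the long-wave radiation the warmed ground then emits:
absorbed_by_greenhouse_gases = 94
lost_to_space = 100 - absorbed_by_greenhouse_gases
print(lost_to_space)  # 6
```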
Factor / Percentage of radiation reflected (albedo)

Cloud type:
- Thin clouds: 30–40
- Thicker stratus clouds: 50–70
- Cumulonimbus clouds (thunder clouds): 90

Surface type:
- Oceans and dark soils: <10
- Coniferous forests and urban areas: 15
- Grasslands and deciduous forests: 25
- Light-coloured deserts: 40
- Snowfields: 85
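Albedo, as defined above, is just reflected radiation as a percentage of incoming radiation; a minimal sketch (the example figures echo the values in the table above):

```python
def albedo(reflected, incoming):
    """Albedo: radiation reflected back to space as a percentage
    of the radiation received."""
    return reflected / incoming * 100.0

# A snowfield reflecting 85 of every 100 units received sits at the
# high end of the table; dark oceans sit at the low end (<10).
print(albedo(85, 100))  # 85.0
print(albedo(8, 100))   # 8.0
```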
Summary of what happens to incoming solar radiation
In order for the Earth to remain at a fairly constant temperature, i.e. not to heat up or cool down, a state of balance or equilibrium must exist between inputs (incoming insolation) and outputs (outgoing terrestrial radiation). However, there are significant spatial differences within the atmosphere: although heat is lost throughout the entire atmosphere via terrestrial radiation, the heating of the globe in the first place is unequal. Throughout the year the equator receives the majority of solar insolation, and from 40°N to 35°S there is a net surplus of radiation, i.e. inputs are greater than outputs (a positive heat balance), whereas at the poles there are fewer inputs than outputs, i.e. a net deficit of radiation (a negative heat balance). This unbalanced heating of the Earth leads to some fascinating consequences that drive the weather systems on the planet! The net result of unequal heating must therefore be the transfer of heat from one place to another as the Earth tries to spread out this unequal heat. This is what drives the large- and small-scale atmospheric circulations (see atmospheric circulation in later notes).
Key Terms:
Jet stream: a band of very strong winds, up to 250 km/h, which occurs in certain locations in the upper atmosphere, on average at 10,000 m. Jet streams may be hundreds of kilometres wide with a vertical thickness of 1,000–2,000 m. They are the product of a large temperature gradient between two air masses. There are two main locations: the Polar Front Jet, where polar and subtropical air meet over the western Atlantic Ocean (c. 60°N and 60°S), and the Subtropical Jet, also westerly and associated with the poleward ends of the Hadley cells (c. 25°N and 25°S).
Airmass: a large body of air with similar temperature and humidity characteristics, acquired from the surface over which it originated.
The Earth's Heat Budget
Horizontal heat transfers – 80% of the heat carried away from the tropics is carried by winds, examples of which include the jet streams, hurricanes and depressions (see later notes on each of these). The remaining 20% is carried polewards by ocean currents.
Vertical heat transfers: Energy is transferred vertically from the surface of the Earth by radiation, conduction and convection. Latent heat (heat exchanged when substances change state) also helps transfer energy. For example, evaporation of water from the ocean absorbs heat, causing cooling, whereas condensation of water droplets, which leads to cloud formation and precipitation, releases heat, causing warming in the upper atmosphere. The vertical motion of air can transfer heat away from areas of high heat budget (such as the equator): air cools as it rises, until it is transferred horizontally by higher-level flows carrying warm air towards the poles.
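The latent-heat transfer described above can be put into rough figures. A commonly quoted value for the latent heat of vaporisation of water is about 2.26 MJ/kg (an assumed constant, not given in the text; the value is slightly higher at typical ocean surface temperatures):

```python
LATENT_HEAT_VAPORISATION = 2.26e6  # J/kg, assumed approximate value for water

def latent_heat_transferred(mass_kg):
    """Energy absorbed from the surroundings when water evaporates
    (cooling the surface); the same energy is released again when the
    vapour condenses, warming the upper atmosphere."""
    return mass_kg * LATENT_HEAT_VAPORISATION

print(latent_heat_transferred(1.0))  # 2260000.0 J per kilogram evaporated
```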
Global factors affecting insolation and heating of the atmosphere
The factors that control the amount of insolation received at any point, and hence the balance between incoming and outgoing radiation (the heat budget), vary considerably both spatially (in space) and temporally (in time).
(a) Long Term Factors controlling insolation:
(i) Altitude: As discussed earlier, the atmosphere is not warmed directly by the sun but by the radiation of heat from the Earth's surface, which is then spread by conduction and convection. Two things happen with altitude: there is a decreasing land area from which heat can be radiated, and the air is under less pressure, so its molecules are fewer and more widely spaced per unit volume. This means air at height loses its ability to retain heat, the phenomenon known as the environmental lapse rate (6.4°C/1,000 m). The opposite sometimes happens under high-pressure (anticyclonic) conditions in the UK in winter, especially in the early morning, when it gets warmer with altitude; this is opposite to the norm and is referred to as a temperature inversion.
(ii) Altitude of the sun: As the angle of the sun in the sky relative to the land surface decreases (becomes more oblique), the amount of atmosphere the rays travel through and the amount of land area being heated by the rays both increase, causing more insolation to be lost through scattering, absorption, reflection and radiation. Places at lower latitudes therefore have higher temperatures than those at higher latitudes.
(iii) Proportion of land and sea: The sea is obviously more transparent than the land and is able to absorb heat energy to a depth of about 10 m; waves and currents can then transfer this heat to greater depths. The sea also has what is known as a greater heat capacity; all this means is that it takes more energy to raise the temperature of the sea (say by one degree) than that of the land. The specific heat capacity of water is roughly twice that of the land, therefore in summer the ocean heats up more slowly than the continents, but in winter the opposite is true: the continents lose their heat more quickly than the ocean, so oceans act as thermal reservoirs of energy! This has some interesting implications. For instance, have you noticed that coastal locations always have lower annual temperature ranges (i.e. the difference between the highest and lowest temperatures) than continental interiors, which have larger temperature ranges, i.e. warmer summers but colder winters? This is known as continentality.
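Continentality shows up clearly in the annual temperature range (warmest minus coldest monthly mean). A quick comparison, using purely illustrative monthly means rather than real station data:

```python
def annual_range(monthly_means_c):
    """Annual temperature range: warmest minus coldest monthly mean."""
    return max(monthly_means_c) - min(monthly_means_c)

# Illustrative monthly mean temperatures (degC), Jan..Dec, not real data:
coastal = [5, 5, 7, 9, 12, 14, 16, 16, 14, 11, 8, 6]
continental = [-8, -6, 0, 8, 15, 19, 21, 20, 14, 7, 0, -5]
print(annual_range(coastal))      # 11 : small range -> maritime climate
print(annual_range(continental))  # 29 : large range -> continentality
```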
(iv) Prevailing Winds: The characteristics of a wind, in terms of its temperature and humidity, are controlled by the type of surface it passes over. A wind passing over the sea tends to be warmer in winter but cooler in summer than a wind passing over land (see item (iii) above for the explanation). Winds passing over land tend to be drier, while winds passing over oceans pick up more moisture.
(v) Ocean Currents: Ocean currents are fundamental in the horizontal transfer of energy around the globe. Warm ocean currents carry energy polewards from the areas of excess solar heating at the equator, and hence give the regions they pass nearer the poles warmer maritime climates. Returning cold currents carry cold water from the poles to the equator and hence have a cooling effect on climates; this whole phenomenon is known as the oceanic circulation or conveyor. The British Isles, at approximately 58°N, have an anomalously mild climate compared to other locations of similar latitude. This is in part due to the prevailing winds and close proximity to the sea, but more important is the ocean current that passes to the west of the British Isles bringing warmer water from the equator, known as the North Atlantic Drift (see diagram below).
A general model of oceanic circulation
(b) Short Term Factors controlling insolation:
(i) Seasonal changes: At the spring (March 21st) and autumn (September 21st) equinoxes the sun is directly overhead at the equator, which therefore receives the maximum insolation during these periods. During the summer solstice (June 21st) and the winter solstice, however, the sun is overhead at the tropics; the hemisphere experiencing 'summer' receives the maximum insolation.
(ii) Length of day and night: Insolation can only reach the surface during daylight hours and peaks at noon. There is no variation in day length at the equator, and hence more constant insolation is received. At the poles, however, day length varies greatly: in winter at the North Pole there is continuous darkness, but in summer the converse is true, with continuous 24-hour daylight! Amazing.
(iii) Aspect: Refers to the way in which slopes face. In the northern hemisphere, slopes that face north and north-east receive less solar heating, as they are in shadow for most of the year, and are therefore cooler. Remember, corries form on N/NE-facing slopes for this reason!
(iv) Cloud cover: Clouds reduce the amount of incoming solar radiation during the day and therefore can lower daytime temperatures. They also act as insulating blankets during the night, keeping surface temperatures high. In a desert, daytime temperatures are high due to the lack of cloud, but night-time temperatures can be very low as there is little insulation preventing heat loss to space; deserts therefore show a large diurnal range in temperature! In tropical regions in the summer, temperatures can often take a small dip due to the presence of the ITCZ and the associated higher cloud cover, as well as increased precipitation (see later notes on tropical continental climates).
(v) Urbanisation: Alters the Earth's albedo and creates 'heat islands'. Urban development disrupts the climatic properties of the surface and the atmosphere, in turn altering the exchanges and budgets of heat, mass and momentum that control the climate of any site. Land clearance, drainage and paving lead to a new microclimate on each site. The roof level of a city (the urban canopy) affects the air near the surface, but it also has downwind effects away from the city. Buildings tend to cause greater air turbulence as well as cyclonic wind action and uplift; jets, vortices and gusts can be common between tall buildings, e.g. Salford Quays, Manchester, while other spots may be artificially sheltered. In rural areas the ground-level climate recovers quickly, but the area under the urban canopy layer takes much longer to return to a natural state. Urbanisation can also slow the movement of weather fronts, for instance through increased frictional drag.
(d) General Atmospheric Circulation: The Tricellular Model
You should already be aware from earlier in this section that there is a surplus of energy at the equator and a deficit (shortage) at the poles and in the upper atmosphere. Because of the Earth's axial tilt of 23.5°, solar radiation in low latitudes arrives almost at 90° to the surface and has less atmosphere to pass through, so the surface is heated more intensely; in higher latitudes the solar radiation arrives obliquely to the surface and has more atmosphere to pass through, hence less heating (see diagram p.20). Given this unequal heating, surely the equator should get hotter and hotter and the poles cooler and cooler? This is not the case, so it seems logical that the imbalance is removed by a transfer of heat known as a convection cell (think about how heat is transferred in a pan of beans – this is simple convection).
Modern advances in meteorology using satellites and radiosondes have given us a better picture of how this works, but our understanding has still not progressed much beyond the simple three-cell (tricellular) model proposed in 1941 by Rossby. See below:
REMEMBER – naming winds: Winds always take their name from the direction from which they blow, not where they blow to!!!
Students often find this section quite hard but I would argue it is quite simple really, it works like this:
Explaining the Hadley Cell: The overhead sun delivers intense solar radiation (insolation) that heats the equatorial regions more than the poles. As hot air is less dense than cold air, it rises (its particles are further apart and possess more energy – this is how a hot air balloon works). As the air rises it cools; once it cools to the temperature of the surrounding air it stops rising (i.e. at the tropical tropopause). It is in this region that an intense area of convectional uplift and subsequent cumulonimbus cloud formation occurs. This is referred to as the Inter-Tropical Convergence Zone (ITCZ); in this area thunderstorms and rain are common. Once the air stops rising it begins to move away from the equator and towards the poles. As the air cools further it becomes denser, and the Coriolis force (the force caused by the Earth's rotation) diverts its flow (into a westerly flow), so the air is forced to slow down and subside (sink). This subsiding air forms the descending limb of the Hadley cell! The descending limbs of the Hadley cell subside at about 30°N and 30°S of the equator to form a region of sub-tropical anticyclones (high-pressure areas) at the surface. At the tropics the pressure is HIGH because the air is sinking and the weight of the overlying air causes high pressure. The result of this sinking air (which prevents convectional uplift and therefore cloud formation) is clear skies and warm, stable conditions. However, the converse is true at the equator, where the air is rising, removing the weight of overlying air and hence causing equatorial low pressure at the surface. The ITCZ occurs above the surface at the equator due to convectional uplift and condensation of moisture forming huge cumulonimbus rain clouds, while at the surface an area of gentle, variable winds known as the doldrums prevails! At the equator there is surface convergence of winds because of the difference in pressure between the tropics and the equator: the north-east trades return from 30°N and the south-east trades return from 30°S. The reason the winds blow in this way is the same reason a bike tyre deflates when you get a puncture: air always moves from high pressure (inside the tyre) to low pressure (the air surrounding the tyre). Remember the words of Mr Richardson: "Winds always blow from high to low!"
Explaining the Polar Cell: The polar cell in the original tricellular model was seen as a response to cold air sinking in the polar regions and returning to lower latitudes as easterly winds.
Explaining the Ferrel Cell: The Ferrel cell was proposed as a response to the movements of air set up by the other two cells. Some of the air at the descending limb of the Hadley cell that does not head back to the equator is sent polewards, forming warm south-westerlies which pick up moisture as they pass over oceans. The warm south-westerlies meet cold arctic air at the polar front (60°N) and are forced to rise, forming polar low pressure at 60°N and 60°S and triggering the rising limbs of the Ferrel and polar cells respectively. The air at 60°N and 60°S rises to a lower altitude than that of the Hadley cell, until it is the same temperature as the air around it (the mid-latitude tropopause); unstable conditions prevail here and produce the heavy cyclonic rainfall associated with the mid-latitude depressions.
Conclusion: The tricellular model, although basic, goes some way towards explaining why the world's major hot deserts occur where there is descending air, and why areas of intense precipitation are common where air rises, for instance at the equator and in the mid-latitudes associated with low pressure and depressions.
Coriolis force: the effect of the Earth's rotation on airflow. In the northern hemisphere air is deflected to the right and in the southern hemisphere to the left. Hence in Britain it explains why air approaching from the tropics comes from a SW direction instead of from the S.
Front: a boundary between a warm air mass and a cold air mass, i.e. where two air masses meet, causing uplift, condensation, cloud formation and subsequent frontal rainfall.
Geostrophic winds: a condition in the mid-latitudes where winds blow parallel to the isobars because the pressure gradient force and the Coriolis force are in balance.
Tricellular model: cross-section of the atmosphere showing the winds, fronts and pressure caused by the main cells.
Relating the Atmospheric Circulation Model to Features of Earth's Climate
Equatorial low pressure is the result of the rising limb of the Hadley cell removing the 'weight' of air from the surface. At this latitude variable, calm winds known to sailors as the doldrums exist. In the tropical latitudes between 30–35°N/S, calm, warm conditions occur due to the descending limb of the Hadley cell causing high pressure; these latitudes are termed the horse latitudes. These conditions were first recognised by Spanish sailors transporting horses to the West Indies, who often found themselves becalmed for weeks in windless seas and were forced to throw the horses overboard in order to conserve food and water. Winds known as the north-east trades blow from the high pressure in the tropics to the low pressure at the equator; they blow from the NE rather than from the N due to the effects of the Coriolis force. Some of the air in the tropics does not return to the equator but blows from the south-west to give the warm south-westerlies. When these winds come into contact with cold polar air returning as easterlies, they form the polar front, which produces the mid-latitude depressions (low-pressure systems) that dominate UK weather. At the polar front the warmer south-westerly air is forced to rise above the cold polar air, which undercuts it, causing the rising limbs of both the Ferrel and polar cells. The result is low pressure at the surface here.
Geostrophic Winds
Winds generally blow from high pressure to low pressure, down a pressure gradient. Remember the bike tyre analogy: if you puncture a bike tyre, air moves from the high pressure inside the tyre to the low pressure outside it. Variations in temperature and altitude cause air pressure to change, and normal winds are the result of this movement down a pressure gradient from higher to lower pressure. However, because the Coriolis force acts against the pressure gradient force, in certain latitudes (especially the mid-latitudes) the two forces balance and a high-altitude wind is deflected through 90 degrees, so that it blows parallel to the isobars (see diagram right).
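The balance described above can be sketched numerically using the standard geostrophic relation V = Δp / (ρ f Δn), with f = 2Ω sin(latitude). The isobar spacing, pressure difference and latitude below are assumed illustrative values, not figures from the text.

```python
import math

# Geostrophic balance sketch: pressure gradient force equals Coriolis force.
# V = dp / (rho * f * dn), where f = 2 * omega * sin(latitude).

OMEGA = 7.292e-5   # Earth's angular velocity (rad/s)
RHO = 1.225        # air density near sea level (kg/m^3)

def geostrophic_speed(dp_pa, dn_m, lat_deg):
    """Wind speed (m/s) for a pressure difference dp_pa over a distance dn_m."""
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))
    return dp_pa / (RHO * f * dn_m)

# Assumed example: isobars 4 hPa (400 Pa) apart, spaced 300 km, at 55 N.
v = geostrophic_speed(dp_pa=400, dn_m=300_000, lat_deg=55)
print(f"{v:.1f} m/s")  # -> 9.1 m/s
```

Note how a tighter isobar spacing (smaller `dn_m`) gives a faster wind, matching the rule later in the chapter that closer isobars mean stronger winds.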
4. The Climate of the British Isles
The British Isles is unique in its variable climate and its location on the edge of a continent, trapped between two seas and affected by the passage of 5 different air masses! The UK climate is classed as a temperate climate; this means there are very rarely extremes in rainfall, drought, wind or temperature.
(a) Basic Climate Characteristics:
(i) Temperature
[Maps: January and July average temperatures]
The temperature maps for January and July point to some of the key factors controlling the climate. Temperatures in July peak in southern regions and generally decrease northwards, which can be explained by the lower amount of insolation received at higher latitudes. Also in July inland regions have higher temperatures than places nearer the coasts, as the cooler sea has less influence on places inland. This concept of warmer summer temperatures inland is known as continentality; it is usually seen on larger land masses than the British Isles, but all the same it is still evident here from these maps. It can also be seen that the relief of the land has an effect: the higher the altitude, the lower the temperatures, in both summer and winter. Remember that the environmental lapse rate is approximately 6.4°C per 1,000 m, i.e. temperatures drop by about 0.64°C for every 100 m of height gained. British mountains are not much higher than 1,000 m, but on the highest peaks you might expect a temperature drop of around 8°C at the summit compared with sea level. For instance the Southern Uplands have lower temperatures than the more northerly central valley between Edinburgh and Glasgow. January temperatures are higher in the areas bordering the Irish Sea in the west, as ocean currents and prevailing winds have a warming effect (remember that in winter the sea has a warming effect but in summer a cooling effect on the land). The North Atlantic Drift brings warmer Gulf Stream waters to this western area of the British Isles. The warming effect is seen far more in winter than in summer, so much so that even in the Scottish winter many towns on the west coast, such as Plockton, enjoy warmer temperatures than their latitude alone would suggest; in Plockton itself palm trees grow! There is a west-east skew in winter temperatures, with Anglesey in north-west Wales being considerably warmer than the Wash on the east coast.
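The lapse-rate arithmetic above can be sketched in a few lines; the 1,300 m summit height is an assumed illustrative figure for one of the highest Scottish peaks, not a value from the text.

```python
# Environmental lapse rate sketch: temperature falls ~6.4 C per 1,000 m.

LAPSE_RATE = 6.4 / 1000   # degrees C lost per metre of ascent

def summit_temp(sea_level_temp_c, height_m):
    """Expected temperature at a given altitude, cooling at the lapse rate."""
    return sea_level_temp_c - LAPSE_RATE * height_m

# A 15 C day at sea level, on an assumed ~1,300 m peak:
print(round(summit_temp(15, 1300), 1))  # -> 6.7
```

A drop of roughly 8°C over 1,300 m is consistent with the figure quoted for the highest summits.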
(ii) Precipitation
The West and North receive the most precipitation, and the South, and particularly the East, of the country receive the least. The two most important factors controlling rainfall are the direction of the prevailing wind and altitude.
Relief or orographic rainfall occurs when moist westerly air is forced to rise over mountains such as the Lake District, Snowdonia and the Pennines. Here water vapour cools and condenses past its dew point (the point at which air becomes saturated) to form clouds and rainfall. For instance, Keswick in the west (Cumbria) receives 1,500 mm/year, whereas Tynemouth on the east coast, at a similar latitude, receives less than half of that at 660 mm. The dry area in the east is known as a rain shadow, because the water is precipitated over the mountains in the west and on the Pennines. As the air sinks beyond the mountains it warms, and since warmer air can hold more water it has less chance of reaching its dew point and is therefore less likely to produce rain.
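The Keswick/Tynemouth contrast works out as follows, using the two annual totals quoted in the text:

```python
# Rain shadow contrast: Keswick (west) vs Tynemouth (east), similar latitude.
keswick, tynemouth = 1500, 660   # annual precipitation totals, mm

reduction = (keswick - tynemouth) / keswick
print(f"Tynemouth receives {reduction:.0%} less rain than Keswick")  # -> 56% less
```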
Frontal Rainfall: Britain is particularly bombarded with frontal systems. Fronts occur when a wedge or portion of warm air is forced to rise and cool above a wedge of cooler air. This commonly occurs when polar air undercuts warmer tropical air, resulting in cooling, condensation and cloud formation at a front (see more on fronts later). This is especially common in the winter, when scores of depressions originating over the Atlantic hit the shores of Britain!
Convectional Rainfall: Some rainfall can be attributed to intense heating of the ground in the summer months, which leads to less dense air rising. Rising encourages cooling past the dew point, and condensation occurs, leading to cloud formation. In the summer months this usually happens by mid-afternoon, and the result is towering cumulonimbus thunderstorm clouds, which yield sudden intense cloudbursts as soon as the water droplets are large enough to overcome the force of the updrafts caused by heating. This is especially common over southern and eastern Britain.
(iii) Wind
The most common wind direction in England is from the south-west, but this varies from day to day, and northerly or north-easterly winds are often common in winter. The strongest winds are found in the north and west of the country, as these areas face the direction of the prevailing winds passing over the Atlantic. Altitude also causes higher wind speeds, as there are fewer obstructions in the way to inhibit wind flow, and wind speeds generally increase with height. The windiest places are mountain and hill tops such as Great Dun Fell, Cumbria, where on a third of the days of the year wind speeds are classed as gale force (73 km/h for more than 10 minutes).
(b) Air Masses affecting the British Isles
Air masses
Air masses are parcels of air that bring distinctive
weather features to the country. An air mass is a body
or 'mass' of air in which changes in temperature and
humidity are relatively slight. That is to say the air
making up the mass is very uniform in temperature and
humidity. An air mass is separated from an adjacent
body of air by a weather front. An air mass may cover
several millions of square kilometres and extend
vertically throughout the troposphere.
Clouds
A classification of clouds was introduced by Luke Howard (1772-1864) who used Latin words to describe
their characteristics.
Cirrus - a tuft or filament (e.g. of hair)
Cumulus - a heap or pile
Stratus - a layer
Nimbus - rain bearing
There are now ten basic cloud types with names based on combinations of these words (the word 'alto',
meaning high but now used to denote medium-level cloud, is also used). Clouds form when moist air is
cooled to such an extent it becomes saturated. The main mechanism for cooling air is to force it to rise.
As air rises it expands - because the pressure decreases with height in the atmosphere - and this causes
it to cool. Eventually it may become saturated and the water vapour then condenses into tiny water
droplets, similar in size to those found in fog, and forms cloud. If the temperature falls below about
minus 20 °C, many of the cloud droplets will have frozen so that the cloud is mainly composed of ice
crystals.
The ten main types of cloud can be separated into three broad categories according to the height of
their base above the ground: high clouds, medium clouds and low clouds.
High clouds are usually composed solely of ice crystals and have a base between 18,000 and 45,000 feet
(5,500 and 14,000 metres).
Cirrus - white filaments
Cirrocumulus - small rippled elements
Cirrostratus - transparent sheet, often with a halo
Medium clouds are usually composed of water droplets or a mixture of water droplets and ice crystals,
and have a base between 6,500 and 18,000 feet (2,000 and 5,500 metres).
Altocumulus - layered, rippled elements, generally white with some shading
Altostratus - thin layer, grey, allows sun to appear as if through ground glass
Nimbostratus - thick layer, low base, dark. Rain or snow falling from it may sometimes be heavy
Low clouds are usually composed of water droplets (though cumulonimbus clouds include ice crystals) and have a base below 6,500 feet (2,000 metres).
Stratocumulus - layered, series of rounded rolls, generally white with some shading
Stratus - layered, uniform base, grey
Cumulus - individual cells, vertical rolls or towers, flat base
Cumulonimbus - large cauliflower-shaped towers, often 'anvil tops', sometimes giving
thunderstorms or showers of rain or snow
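The three height bands above lend themselves to a small lookup. A minimal sketch using the base heights quoted in the text (the function name is illustrative):

```python
# Cloud-height categories from the text (base height above ground, feet):
# high 18,000-45,000 ft; medium 6,500-18,000 ft; low below 6,500 ft.

def cloud_level(base_ft):
    """Classify a cloud by the height of its base."""
    if base_ft >= 18_000:
        return "high"     # e.g. cirrus, cirrocumulus, cirrostratus
    if base_ft >= 6_500:
        return "medium"   # e.g. altocumulus, altostratus, nimbostratus
    return "low"          # e.g. stratus, cumulus, cumulonimbus

print(cloud_level(25_000))  # -> high
```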
Interpreting weather maps
Isobars - The lines shown on a weather map (synoptic map) are isobars - they join points of equal
atmospheric pressure. The pressure is measured by a barometer, with a correction then being made to
give the equivalent pressure at sea level. Meteorologists measure pressure in units of millibars (mb). In
the British Isles the average sea-level pressure is about 1013 mb, and it is rare for pressure to rise
above 1050 mb or fall below 950 mb. Charts showing isobars are useful because they identify features
such as anticyclones and ridges (areas of high pressure) and depressions and troughs (areas of low
pressure), which are associated with particular kinds of weather. These features move in an essentially
predictable way.
There are three important relationships between isobars and winds.
The closer the isobars, the stronger the wind.
The wind blows almost parallel to the isobars.
The direction of the wind is such that if you stand with your back to the wind in the northern
hemisphere, the pressure is lower on the left than on the right.
These make it possible to deduce the wind flow from the isobars.
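The third rule (back to the wind in the northern hemisphere, lower pressure on the left) can be sketched as a toy calculation. The bearing convention below (degrees clockwise from north) is an assumption for illustration:

```python
# Buys Ballot-style rule from the text: stand with your back to the wind in
# the northern hemisphere and lower pressure lies on your left.

def low_pressure_bearing(wind_from_deg):
    """Bearing of lower pressure for an observer with their back to the wind.

    wind_from_deg: direction the wind comes FROM, degrees clockwise from north.
    The wind blows toward (wind_from_deg + 180); low pressure sits 90 degrees
    to the left of that heading.
    """
    heading = (wind_from_deg + 180) % 360   # direction faced, back to the wind
    return (heading - 90) % 360             # 90 degrees to the left

print(low_pressure_bearing(270))  # westerly wind -> 0 (low pressure to the north)
```

A westerly wind with low pressure to the north matches the typical picture of a depression tracking past to the north of an observer in the UK.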
Winds - The direction given for the wind refers to the direction from which it comes. For example, a
westerly wind is blowing from the west towards the east. In general, the weather is strongly influenced
by the wind direction, so information about the wind provides an indication of the type of weather likely
to be experienced. However, this approach is effective only if the wind is blowing from the same
direction for some time. A marked change in wind direction usually indicates a change in the weather.
Northerly winds tend to bring relatively cold air from polar regions to the British Isles. Similarly,
southerly winds tend to bring relatively warm air from the tropics. The characteristics of the air are
also affected by its approach to the British Isles. Air picks up moisture if it travels across the sea, but
remains relatively dry if it comes across the land.
Fronts - The boundary between two different types of air mass is called a front. In our latitudes a
front usually separates warm, moist air from the tropics and cold, relatively dry air from polar regions.
On a weather chart, the round (warm front) or pointed (cold front) symbols on the front, point in the
direction of the front's movement. Fronts move with the wind, so they usually travel from the west to
the east. At a front, the heavier cold air undercuts the less dense warm air, causing the warm air to rise
over the wedge of cold air. As the air rises it cools and condensation occurs, thus leading to the
formation of clouds. If the cloud becomes sufficiently thick, rain will form. Consequently, fronts tend to
be associated with cloud and rain. In winter, there can be sleet or snow if the temperature near the
ground is close to freezing
Weather fronts: A weather front is simply the boundary between two air masses.

Cold front: This is the boundary between warm air and cold air, and is indicative of cold air replacing warm air at a point on the Earth's surface. On a synoptic chart a cold front appears blue. The presence of a cold front means cold air is advancing and pushing underneath warmer air, because the cold air is 'heavier', or denser, than the warmer air. Cold air is thus replacing warm air at the surface. The symbols on the front indicate the direction the front is moving. The passage of a cold front is normally marked at the Earth's surface by a rise of pressure, a fall of temperature and dew point, and a veer of wind (in the northern hemisphere). Rain occurs in association with most cold fronts and may extend some 100 to 200 km ahead of or behind the front. Some cold fronts give only a shower at the front, while others give no precipitation. Thunder may occur at a cold front.

Warm front: This is the boundary between cold air and warm air, and is indicative of warm air replacing cold air at a point on the Earth's surface. On a synoptic chart a warm front appears red. The presence of a warm front means warm air is advancing and rising up over cold air, because the warm air is 'lighter', or less dense, than the colder air. Warm air is thus replacing cold air at the surface. The symbols on the front indicate the direction the front is moving. As a warm front approaches, temperature and dew point within the cold air gradually rise and pressure falls at an increasing rate. Precipitation usually occurs within a wide belt some 400 km in advance of the front. Passage of the front is usually marked by a steadying of the barometer, a jump in temperature and dew point, a veer of wind (in the northern hemisphere), and a cessation or near cessation of precipitation.

Occluded front: These are more complex than cold or warm fronts. An occlusion is formed when a cold front catches up with a warm front; when this happens, the warm air in the warm sector is forced up from the surface. On a synoptic chart an occluded front appears purple.

Synoptic Chart Symbols
[Diagram: weather station entry, showing 1. wind speed, 2. cloud cover and 3. precipitation]
(i) Infancy
Initially a warm air mass such as one
from the tropics, meets a cooler air
mass, such as one from the polar
regions. Depressions which affect the
UK normally originate over the Atlantic
Ocean.
(ii) Maturity
The warm air rises up over the colder
air which is sinking. A warm sector
develops between the warm and cold
fronts. The mature stage of a
depression often occurs over the UK.
(iii) Occlusion
The cold front travels at around 40
to 50 miles per hour, compared to the
warm front which travels at only 20
to 30 miles per hour. Therefore the
cold front eventually catches up with
the warm front. When this occurs an
occlusion is formed.
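The catch-up arithmetic implied by these speeds can be sketched as follows; the 500-mile initial separation of the fronts is an assumed illustrative figure, not from the text.

```python
# Occlusion timing sketch: the cold front (40-50 mph) catches the warm
# front (20-30 mph), closing the gap at their relative speed.

def hours_to_occlude(gap_miles, cold_mph=45, warm_mph=25):
    """Time for the cold front to close a given gap on the warm front."""
    closing_speed = cold_mph - warm_mph   # relative speed, mph
    return gap_miles / closing_speed

print(hours_to_occlude(500))  # -> 25.0 hours
```

At mid-range speeds a depression therefore occludes within roughly a day, which fits the life cycle spanning the Atlantic crossing described here.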
(iv) Death
Eventually the frontal system dies as all the warm air has been pushed up from the surface and all that
remains is cold air. The occlusion dies out as temperatures are similar on both sides. This stage normally
occurs over Europe or Scandinavia.
Weather changes associated with the passage of a depression
A depression is an area of low atmospheric pressure. It is represented on a weather map by a system of closely drawn isobars with pressure decreasing towards the centre. Depressions usually move rapidly from west to east across the British Isles. Winds move in an anticlockwise direction around the centre of the depression. These winds are usually quite strong; in fact, the closer the isobars are together, the stronger the winds will be.
A depression affecting the British Isles originates in the north Atlantic, where two different air masses meet to form a front. The two air masses involved are:
polar maritime air (Pm) - air from the northwest Atlantic, which is cold, dense and moist
tropical maritime air (Tm) - air from the southwest, which is warmer, less dense and also moist
These two bodies of air move towards each other, with the warmer, less dense Tm air from the south rising above the colder, more dense Pm air from the north. The rising air twists due to the rotational effect of the Earth's spin. This twisting vortex causes a wave or kink to be produced in the front forming out in the Atlantic, which increases in size to become a depression.
This means that as a cold front passes, the weather changes from being mild and overcast to being cold
and bright, possibly with showers (typical of cold polar air travelling over the sea). The passage of the
front is often marked by a narrow band of rain and a veer in the wind direction.
As the warm front approaches, there is thickening cloud and eventually it starts to rain. The belt of rain
extends 100-200 miles ahead of the front. Behind the front the rain usually becomes lighter, or ceases,
but it remains cloudy. As a warm front passes, the air changes from being fairly cold and cloudy to being
warm and overcast (typical of warm air from the tropics travelling over the sea). Also there is a
clockwise change in wind direction, and the wind is said to 'veer'.
(d) Origin and Nature of Anticyclones:
Associated weather conditions in summer and winter
An anticyclone is an area of relatively high atmospheric pressure. It is represented on a weather map by
a system of widely spaced isobars with pressure increasing towards the centre.
Anticyclones move slowly and may remain stationary over an area for several days or weeks (a blocking anticyclone).
[Photo: warm, dry anticyclonic conditions in summer]
In an anticyclone (also referred to as a 'high') the winds tend to be light and blow in a clockwise
direction. Also the air is descending, which inhibits the formation of cloud. The light winds and clear
skies can lead to overnight fog or frost.
If an anticyclone persists over northern Europe in winter, then much of the British Isles can be
affected by very cold east winds from Siberia. However, in summer an anticyclone in the vicinity of the
British Isles often brings fine, warm weather.
Summer
In summer, anticyclones mean:
hot daytime temperatures - over 25°C
cooler night-time temperatures - may not fall below 15°C
clear skies by day and night generally
hazy sunshine may exist in some areas
early morning mists/fogs will rapidly disperse
heavy dew on the ground in the morning
east coast of Britain may have sea fogs or advection fog caused by on-shore winds
thunderstorms may be created due to convectional uplift
Pressure: High due to subsiding air.
Wind Direction: Clockwise, blowing outwards from the centre of high pressure.
Wind Speed: Calm or gentle winds due to gentle pressure gradients.
Relative Humidity: Low, as descending air is warming, which encourages evaporation rather than condensation.
Cloud: Often cloudless - although the heat of the day can produce thermals leading to cumulo-nimbus clouds.
Precipitation: Usually dry due to sinking air, apart from mist and dew in early mornings (radiation cooling) and the risk of a thunderstorm (convectional uplift) after a few days of high pressure.
Temperature: Very warm/hot during the day and cool at night due to the absence of cloud, intense insolation and radiation.
Advection Fog
Forms when warm air passes over or meets cold air, giving rapid cooling. This type of fog forms when the air becomes saturated with water droplets. It is common on the north-east coast of Scotland, in northern England and on the Isle of Man. Salt from the sea acts as nuclei for condensation. Oceans usually retain their heat for longer than land, so when the warmer, moist air over the sea comes into contact with colder, dry air blowing from the land, condensation and fog formation prevail. This also works if the moist warm air is blowing off the sea! These fogs can often be persistent, recurring over several days.
Winter
In winter, anticyclones result in:
cold daytime temperatures - below freezing to a maximum of 5°C
very cold night-time temperatures - below freezing, with frosts
clear skies by day and night generally
low-level cloud may linger, and radiation fogs may remain in low-lying areas as a result of temperature inversion
high levels of atmospheric pollution in urban areas, caused by a combination of subsiding air and lack of wind
Temperature Inversion: This is where the decrease of temperature with altitude is much less than normal or, in extreme cases, where temperature actually increases with altitude. Temperature inversions happen when high pressure dominates and can form in various ways:
(i) Radiative cooling of air near the ground at night
(ii) Advective cooling - warm air moving over cold air or a cold surface
(iii) A warm air mass rising over a cold air mass at a front
(iv) Radiative heating of the upper atmosphere
(e) Storm events and responses to them – The 'Great Storm' of 1987
With winds gusting at up to 100mph, there was massive
devastation across the country and 18 people were killed.
About 15 million trees were blown down. Many fell on to roads
and railways, causing major transport delays. Others took
down electricity and telephone lines, leaving thousands of
homes without power for more than 24 hours.
Buildings were damaged by winds or falling trees. Numerous
small boats were wrecked or blown away: one ship at Dover was blown over, and a Channel ferry, the Hengist, was blown ashore near Folkestone. The storm hit many parts of the UK in the middle of October 1987, and while it took a human toll, claiming 18 lives in England, it is thought many more may have been hurt if it had struck during the day.
This is the Hengist passenger ferry that was washed up on to the shore at The Warren in Folkestone during the Great Storm.
The storm gathers
Four or five days before the storm struck, forecasters predicted severe weather was on the way. As the event drew closer, however, weather prediction models started to give a less clear picture. Instead of stormy weather over a considerable part of the UK, the models suggested severe weather would pass to the south of England, only skimming the south coast.
During the afternoon of 15 October, winds were very light over most parts of the UK and there was
little to suggest what was to come. However, over the Bay of
Biscay, a depression was developing. The first gale warnings for
sea areas in the English Channel were issued at 6.30 a.m. on 15
October and were followed, four hours later, by warnings of
severe gales.
At 12 p.m. (midday) on 15 October, the depression that
originated in the Bay of Biscay was centred near 46° N, 9° W
and its depth was 970 mb. By 6 p.m., it had moved north-east to
about 47° N, 6° W, and deepened to 964 mb.
At 10.35 p.m. winds of Force 10 were forecast. By midnight, the
depression was over the western English Channel, and its central
pressure was 953 mb. At 1.35 a.m. on 16 October, warnings of
Force 11 were issued. The depression moved rapidly north-east,
filling a little as it went, reaching the Humber estuary at about
5.30 am, by which time its central pressure was 959 mb.
Dramatic increases in temperature were associated with the
passage of the storm's warm front.
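The chart times above imply a striking deepening rate. A quick check is sketched below; the 24 mb/24 h benchmark mentioned in the comment is the commonly cited rule of thumb for 'explosive' deepening, not a figure from the text.

```python
# Deepening of the October 1987 depression, from the chart times in the text:
# 970 mb at midday on 15 October, 953 mb by midnight.

hours = 12
deepening_mb = 970 - 953               # 17 mb in 12 hours

rate_per_24h = deepening_mb * 24 / hours
print(rate_per_24h)  # -> 34.0 mb per 24 hours, well past the ~24 mb/24 h benchmark
```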
Warning the public
During the evening of 15 October, radio and TV forecasts mentioned strong winds but indicated heavy
rain would be the main feature, rather than strong wind. By the time most people went to bed,
exceptionally strong winds hadn't been mentioned in national radio and TV weather broadcasts. Warnings
of severe weather had been issued, however, to various agencies and emergency authorities, including
the London Fire Brigade. Perhaps the most important warning was issued by the Met Office to the
Ministry of Defence at 0135 UTC, 16 October. It warned that the anticipated consequences of the
storm were such that civil authorities might need to call on assistance from the military.
In south-east England, where the greatest damage occurred, gusts of 70 knots or more were recorded
continually for three or four consecutive hours. During this time, the wind veered from southerly to
south-westerly. To the north-west of this region, there were two maxima in gust speeds, separated by
a period of lower wind speeds. During the first period, the wind direction was southerly. During the
latter, it was south-westerly. Damage patterns in south-east England suggested that whirlwinds
accompanied the storm. Local variations in the nature and extent of destruction were considerable.
How the storm measured up
Comparisons of the October 1987 storm with previous severe storms were inevitable. Even the oldest
residents of the worst affected areas couldn't recall winds so strong, or destruction on so great a scale.
The highest wind speed reported was an estimated 119 knots (61 m/s) in a gust soon after midnight
at Quimper coastguard station on the coast of Brittany (48° 02' N 4° 44' W).
The highest measured wind speed was a gust of 117 knots (60 m/s) at 12.30 am at Pointe du Roc
(48° 51' N, 1° 37' W) near Granville, Normandy.
The strongest gust over the UK was 100 knots at Shoreham on the Sussex coast at 3.10 am, and
gusts of more than 90 knots were recorded at several other coastal locations.
Even well inland, gusts exceeded 80 knots. The London Weather Centre recorded 82 knots at 2.50
am, and 86 knots was recorded at Gatwick Airport at 4.30 am (the authorities closed the airport).
A hurricane or not?
TV weather presenter Michael Fish will long be remembered for telling viewers there would be no
hurricane on the evening before the storm struck. He was unlucky, however, as he was talking about a
'different storm system' over the western part of the North Atlantic Ocean that day. This storm, he
said, would not reach the British Isles, and it didn't. It was the rapidly
deepening depression from the Bay of Biscay which struck. This storm
wasn't officially a hurricane as it did not originate in the tropics — but
it was certainly exceptional. In the Beaufort scale of wind force,
Hurricane Force (Force 12) is defined as a wind of 64 knots or more,
sustained over a period of at least 10 minutes. Gusts, which are
comparatively short-lived (but cause a lot of destruction) are not taken
into account. By this definition, Hurricane Force winds occurred locally
but were not widespread. The highest hourly-mean speed recorded in
the UK was 75 knots, at the Royal Sovereign Lighthouse. Winds reached
Force 11 (56–63 knots) in many coastal regions of south-east England.
Inland, however, their strength was considerably less. At the London Weather Centre, for example,
the mean wind speed did not exceed 44 knots (Force 9). At Gatwick Airport, it never exceeded 34
knots (Force 8).
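The Beaufort figures quoted here can be reproduced with a small classifier; the boundaries below are the standard knot ranges for Force 8 and above, each taken as a mean speed sustained over at least 10 minutes.

```python
# Beaufort classification for the mean-speed figures in the text (knots).
# Boundaries: F8 34-40, F9 41-47, F10 48-55, F11 56-63, F12 >= 64.

BOUNDS = [(64, 12), (56, 11), (48, 10), (41, 9), (34, 8)]

def beaufort_force(mean_knots):
    """Beaufort force for a 10-minute mean wind speed (gale range and above)."""
    for threshold, force in BOUNDS:
        if mean_knots >= threshold:
            return force
    return None  # below gale force, outside this sketch

print(beaufort_force(75))  # Royal Sovereign Lighthouse -> 12 (Hurricane Force)
print(beaufort_force(44))  # London Weather Centre -> 9
```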
The powerful winds experienced in the south of England during this storm are deemed a once-in-200-years event, meaning that in any given year there is only a 1-in-200 chance of winds this strong. This storm was compared with one in 1703, also known as a 'great storm', and this could be
justified. The storm of 1987 was remarkable for its ferocity, and affected much the same area of the
UK as its 1703 counterpart.
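A return period is a statement of annual probability, so it can be turned into the chance of witnessing at least one such storm over a span of years; the 50-year span below is an illustrative choice.

```python
# Return-period arithmetic for the '1-in-200-year' figure above.

def chance_of_at_least_one(return_period_yrs, span_yrs):
    """Probability of at least one event in span_yrs, assuming independent years."""
    annual_p = 1 / return_period_yrs
    return 1 - (1 - annual_p) ** span_yrs

# Over a 50-year span:
print(f"{chance_of_at_least_one(200, 50):.1%}")  # -> 22.2%
```

So 'once in 200 years' does not mean such a storm cannot recur sooner; over a long lifetime the odds of seeing one are far from negligible.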
Northern Scotland is much closer to the main storm tracks of the Atlantic than south-east England.
Storms as severe as October 1987 can be expected there far more frequently than once in 200 years.
Over the Hebrides, Orkney and Shetland, winds as strong as those which blew across south-east
England in October 1987 can be expected once every 30 to 40 years. The 1987 storm was also
remarkable for the temperature changes that accompanied it. In a five-hour period, increases of more
than 6 °C per hour were recorded at many places south of a line from Dorset to Norfolk.
The aftermath
Media reports accused the Met Office of failing to forecast the storm correctly. Repeatedly, they
returned to the statement by Michael Fish that there would be no hurricane — which there hadn't been.
It did not matter that the Met Office forecasters had, for several days before the storm, been
warning of severe weather!
The Met Office had performed no worse than any other European
forecasters when faced with this exceptional weather event.
However, good was to come of this situation. Based on the findings of an internal Met Office enquiry,
scrutinised by two independent assessors, various improvements were made. For example, observational
coverage of the atmosphere over the ocean to the south and west of the UK was improved by increasing
the quality and quantity of observations from ships, aircraft, buoys and satellites, while refinements
were made to the computer models used in forecasting. Some argue that Fish failed to spot the
approach of the storm because of a reduction in funding and the subsequent removal of a weather
ship from the Bay of Biscay! So you could argue that he was allowed to become a scapegoat!
5. The Climate of the Wet/Dry Savannas
(a) Description of the climate characteristics of the Tropical Continental Climate
The tropical continental type of climate in West Africa occurs mainly in those areas situated between the equatorial and hot desert climate types. For part of the year these areas lie under the influence of the dry trade winds, but for the rest of the year they are invaded by the belt of convectional rains. Consequently there is an alternation of wet and dry seasons. Because the tropical continental is essentially a transitional climate between the tropical rain forests and the hot deserts, variations occur with increasing latitude from the equator: the length of the dry season and the unreliability of precipitation increase polewards. The main characteristic features of the tropical continental climate are:
Temperatures are high throughout the year, although
annual and daily ranges tend to be larger than in the
equatorial climate, though not as large as in the hot desert climate.
At the equatorial/rain forest margins temperatures range
from 22 to 28°C over a year, and at the hot desert margins
from 18 to 34°C over a year.
The rainfall is highly seasonal in its distribution. The
bulk of the rain falls during the summer months (May, June,
July & August), and the rest of the year is very dry (i.e. the
winter months). With increasing distance from the equator,
the rainfall decreases in amount, and the dry season
becomes longer and more severe. At the equatorial/rain
forest margins precipitation is over 1000mm a year with
only 1 dry month and at the hot desert margins
precipitation is less than 500mm a year, with 9 or 10 dry
months.
Both relative humidity and the amount of cloud cover
vary with the season, being generally high during the rains
and much lower during the dry season.
The wind patterns also change with the seasons. In
West Africa, for example, warm, moist winds blow from the
south in the summer months and warm, dry winds blow from
the north in the winter season.
Climate Data

Atar, Mauritania
Month            J    F    M    A    M    J    J    A    S    O    N    D
Temp (°C)       21   23   25   29   32   35   34   34   34   31   25   21
Rainfall (mm)    3    0    0    0    0    3    8   31   28    3    3    0
Description of the Atar climate graph
Average annual temperature: 29°C
Average annual range: 21 to 35 = 14°C
Highest temperatures: summer (June, July, August, Sept)
Total Rainfall: 79mm/year
Dry season (< 50mm): 12 months
Wet season (> 50mm): 0 months
Maximum rainfall: single peak in August
The hot desert climate is experienced in West Africa to the north of about 18°N. In this region the
rainfall is very light, everywhere averaging less than 250mm a year. It is also highly irregular in its
occurrence. Away from the coast temperatures are more extreme than in any other part of West
Africa, with particularly large daily (diurnal) ranges being experienced.
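The climate-graph descriptions in this section can be derived directly from the monthly figures. As a check, here is a short script using the Atar data from the table above (the 50 mm wet/dry threshold follows this guide's own convention):

```python
# Monthly data for Atar, Mauritania, copied from the table above
temps = [21, 23, 25, 29, 32, 35, 34, 34, 34, 31, 25, 21]   # °C
rain = [3, 0, 0, 0, 0, 3, 8, 31, 28, 3, 3, 0]              # mm

mean_temp = sum(temps) / 12                    # average annual temperature
annual_range = max(temps) - min(temps)         # annual temperature range
total_rain = sum(rain)                         # total annual rainfall
dry_months = sum(1 for r in rain if r < 50)    # months below the 50 mm threshold
wet_months = 12 - dry_months

print(f"Mean temp: {mean_temp:.0f}°C, range: {annual_range}°C")
print(f"Rainfall: {total_rain} mm/year; {dry_months} dry months, {wet_months} wet months")
```

Running this reproduces the figures quoted for Atar above: a mean of about 29°C, a 14°C annual range, 79 mm of rain a year and twelve dry months.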
Calabar, Nigeria
Month            J    F    M    A    M    J    J    A    S    O    N    D
Temp (°C)       27   27   28   27   27   27   25   25   26   26   27   27
Rainfall (mm)   43   76  152  213  312  406  450  406  427  310  191   43
Description of the Calabar climate graph
Average annual temperature: 27°C
Average annual range: 25 to 28 = 3°C
Highest temperatures: winter (Nov, Dec, Jan, Feb, March, April)
Total Rainfall: 3029mm/year
Dry season (< 50mm): 2 months (winter)
Wet season (> 50mm): 10 months (summer)
Maximum rainfall: double peak in June and September
Kano, Nigeria
Month            J    F    M    A    M    J    J    A    S    O    N    D
Temp (°C)       26   27   30   34   33   30   28   26   25   28   26   26
Rainfall (mm)    0    5   10   15   75  125  220  265  150   25    0    0
Description of the Kano climate graph
Average annual temperature: 28°C
Average annual range: 25 to 34 = 9°C
Highest temperatures: at end of dry season (April & May)
Total Rainfall: 1040mm/year
Dry season (< 50mm): 7 months in the winter (Oct to April)
Wet season (> 50mm): 5 months in the summer (May to Sept)
Maximum rainfall: single peak in August (wet season)
(b) Explanation of the Tropical Continental Climate of West Africa:
'The role of subtropical anticyclones and the inter-tropical convergence zone (ITCZ)'
1. Temperature
In West Africa temperatures are high throughout the year because the sun is overhead, or nearly so, for
many months. The angle of the sun affects the amount of atmosphere the sun's radiation has to pass
through, and this determines the amount of radiation that reaches the earth's surface. When the sun is
directly overhead, radiation travels through the least amount of atmosphere en route to the earth's
surface, so temperatures are higher. However, there is a short cooler season (in comparison with the
equatorial climate) in tropical continental areas during the summer months, due to two factors. Firstly,
this is the time of maximum rainfall, and the increased cloud cover reduces incoming solar radiation.
Secondly, during the summer months the sun is not directly overhead at these latitudes: it is overhead
further north, at the Tropic of Cancer, on June 21st. This increases the amount of atmosphere that the
sun's radiation has to travel through, reducing the amount of heating, and also spreads the insolation
over a larger land area.
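The geometry described above can be sketched numerically. This is a rough illustration only: the declination formula is a standard approximation, the 12°N latitude (roughly that of Kano) is an assumed value, and atmospheric absorption is ignored, so only the "spreading" effect of a lower sun is captured:

```python
import math

def declination(day_of_year):
    # Standard approximation for the sun's declination (degrees)
    return 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def noon_elevation(lat_deg, day_of_year):
    # Elevation of the noon sun: 90° minus the angular distance
    # between the latitude and the latitude of the overhead sun
    return 90.0 - abs(lat_deg - declination(day_of_year))

def relative_insolation(elev_deg):
    # A beam arriving at elevation e is spread over a larger area than an
    # overhead beam; the surface receipt scales with sin(e)
    return math.sin(math.radians(elev_deg))

LAT = 12.0  # assumed latitude for a tropical continental station (roughly Kano)
for label, day in [("21 June", 172), ("21 December", 355)]:
    e = noon_elevation(LAT, day)
    print(f"{label}: noon sun about {e:.0f}° high, "
          f"insolation about {relative_insolation(e):.2f} of overhead")
```

The June sun stands far higher in the sky at 12°N than the December sun, so the surface receives noticeably more energy per unit area in summer, exactly the effect the text describes.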
2. Wind
The seasonal wind patterns in West Africa (warm, moist winds blowing from the south in the summer
months and warm, dry winds blowing from the north in the winter season) are influenced by the tropical
continental (Tc) and tropical maritime (Tm) air masses.
3. Rainfall
Rainfall is the most important element in the climate of West Africa. The amount and seasonal
distribution of rainfall in West Africa is largely determined by fluctuations in the position of two
important air masses and their associated wind systems.
The tropical continental air mass (cT or Tc) originates over the Sahara Desert, and consequently is warm
and dry. Associated with the Tc air mass are easterly or north-easterly winds, known in West Africa as
the Harmattan, which have a drying influence on the areas over which they pass.
The tropical maritime air mass (mT or Tm) originates over the Atlantic Ocean to the south of the
equator, and consequently is warm and moist. Associated with the Tm air mass are moisture-laden winds
called the Southwest Monsoon.
Wet Season in Summer Months
The migration polewards of the ITCZ during the summer months brings rainfall to this area
of West Africa giving it a wet season (e.g. Kano in Nigeria). Rainfall results from the moist
unstable Tm air brought in by the Southwest Monsoon winds from the Atlantic Ocean.
Areas at the poleward limit of ITCZ movement are only briefly affected (i.e. latitude
20°N), thus they have only a brief wet season and low annual rainfall totals (e.g. Atar in
Mauritania).
Nearer the equator the wet season lasts whilst the ITCZ is poleward, and the area is under
the influence of the Tm air mass brought in by the Southwest Monsoon winds. Maximum
rainfall occurs with the passage polewards of the ITCZ and on its return, thus giving a
double maxima in some areas (e.g. Calabar in Nigeria).
The diagram shows what is happening in cross-sectional view.
The winter months are the dry season in West Africa. During the winter the ITCZ retreats southward,
and in January is situated just to the north of the Gulf of Guinea coast at latitude 5°N. As a result, the
influence of the Southwest Monsoon winds is restricted to that part of West Africa which lies to the
south of latitude 5°N. The remainder of West Africa in January lies under the drying influence of the
Harmattan, and consequently receives very little rainfall at that time of year.
Dry Season in Winter Months
The migration equatorwards of the ITCZ during the winter months brings the hot, dry desert Harmattan
winds across this area of West Africa, giving it a dry season (e.g. Kano in Nigeria). Rainfall is restricted
to the far south of the area and results from the moist unstable Tm air brought in by the Southwest
Monsoon winds from the Atlantic Ocean (e.g. Calabar in Nigeria). Areas well to the north have the
longest dry season, due to the drying influence of the Harmattan for most of the year and the fact that
the moist Southwest Monsoon winds are pushed back far to the south (e.g. Atar in Mauritania).
The diagram shows what is happening in cross-sectional view.
So, the alternating wet and dry seasons in West Africa are directly related to the positions of the Tc
and Tm air masses. When the Tm air mass moves over the area (in the summer months) it is hot and wet,
with winds blowing from the south, and when the Tc air mass moves over the region (in the winter
months) it is hot and dry, with the winds blowing from the north. But what causes these two air masses
to move over the region at different times?
Surface winds always blow from areas of high pressure to areas of low pressure. As you can see from
the surface pressure map below a band of high pressure can be seen along the line of the Tropic of
Capricorn (23.5 S) and another just north of the Tropic of Cancer (23.5 N), these pressure systems are
the subtropical highs. In between these two areas of high pressure is an area of low pressure occurring
roughly along the line of the equator, hence generally termed equatorial low pressure. The arrows on the
pressure map indicate the relative movement of the surface winds, and over West Africa that means the
winds blow in towards the equator. The winds blowing from the high pressure in the north are part of
the Tropical continental air mass (Tc) and the winds blowing from the south are part of the Tropical
maritime air mass (Tm).
The low pressure area where these two air masses (Tc & Tm) meet is known as the Inter-Tropical
Convergence Zone (ITCZ). The pressure map below shows where the ITCZ is located.
The location of the ITCZ is linked to intense heating by the overhead sun. Over the equatorial regions
the sun is directly overhead for most of the year and heats this area up more than any other.
This intense heating of the land surface warms the air directly above it, eventually causing it to rise.
Rising air forms a low pressure area beneath it. This is the low pressure that the surface winds flow into
from north and south. However, the position of the ITCZ is not stationary, but fluctuates slowly
throughout the year, following, with a lag of a month or two, the apparent movement of the overhead
sun. This can be seen in the pressure map below: in the map for July the ITCZ is much further north
than in the January pressure map above.
So the seasonal wind patterns, variations in temperature and the alternating wet and dry seasons of the
tropical continental climate are all caused by the influence of the Tc and Tm air masses. These two air
masses move over West Africa because air moves from areas of high pressure to areas of low pressure.
In the winter the low pressure is situated over the equator or south of the equator, allowing the dry Tc
winds from the Sahara to influence West Africa. In the summer months the low pressure area is
situated further north (about 20°N), towards the Tropic of Cancer, allowing the wet Tm winds from the
Atlantic to influence West Africa. The area of low pressure where the winds converge from the
surrounding high pressure systems is known as the Inter-Tropical Convergence Zone (ITCZ). The ITCZ
is associated with the heating from the overhead sun: intense heating warms the ground, causing
convectional uplift to occur. This uplift creates a low pressure area at the ITCZ, which follows the
movement of the overhead sun through the year. So, finally, why does the sun seem to move through the
year?
It is the tilt of the Earth (23.5°) on its axis which causes the position of the overhead sun to move
during the year. As the Earth circles the sun for one half of the year the northern hemisphere is tilted
towards the sun giving it its summer months. For the other half of the year the northern hemisphere is
tilted away from the sun giving it its winter. The net effect is that the overhead sun seems to move
from the Tropic of Cancer to the Tropic of Capricorn over a year. See the diagram below. This causes
the pressure belts to move which in turn creates transitional climates with seasonal rainfall patterns
such as the tropical continental climate of West Africa.
All of the discussion above finally allows us to set the tropical continental climate into a global context.
The various climates found on the Earth are all related to the global atmospheric circulation system.
Below is a much simplified diagram to show you how the whole system works!
6. Tropical Revolving Storms (Hurricanes)
1. Origin of Hurricane Hazard
The ingredients for a hurricane include a pre-existing weather disturbance, warm tropical oceans
(>26°C), moisture, and relatively light winds aloft. If the right conditions persist long enough, they can
combine to produce the violent winds, incredible waves, torrential rains, and floods we associate with
this phenomenon.
A hurricane is a type of tropical cyclone, which is a generic term for a low pressure system that
generally forms in the tropics, 5-15°N or S, because at these latitudes the effect of the descending
limb of the Hadley Cell, which causes the subtropical high pressure, is weaker, therefore encouraging
evaporation. The cyclone is accompanied by thunderstorms and, in the Northern Hemisphere, a
counter-clockwise (cyclonic) circulation of winds near the earth's surface (and anti-cyclonic outflow in
the upper atmosphere). Tropical cyclones range from 200 to 600 km in diameter, can cover an area of
1,300,000 km², and last for a few weeks. They occur on the western side of ocean basins and track
westwards until they make landfall, where their energy is dissipated. It is the effect of the Coriolis
force that causes them to begin to spiral in a cyclonic direction.
In terms of their potential for destruction, hurricanes are the world's most violent storms. The amount
of energy produced by a typical hurricane in just a single day is enough to supply all of the USA's
electrical needs for 6 months!
Tropical cyclones develop through a range of weather systems:
Tropical Disturbance
This is the initial mass of thunderstorms, which has only a slight wind circulation. Although many
tropical disturbances occur each year, only a few develop into true hurricanes.
Tropical Depression
An organized system of clouds and thunderstorms with a defined surface circulation and
maximum sustained winds of between 23 and 38 mph.
Tropical Storm
An organized system of strong thunderstorms with a defined surface circulation and maximum
sustained winds of 39-73 mph.
Tropical Cyclone (Hurricane)
An intense tropical weather system of strong thunderstorms with a well-defined surface
circulation and maximum sustained winds of 74 mph (118 km/h) or higher.
When the winds from these storms reach 39 mph (34 kts), the cyclones are given names. Years ago, an
international committee developed names for Atlantic cyclones. In 1979 a six year rotating list of
Atlantic storm names was adopted — alternating between male and female hurricane names. Storm
names are used to facilitate geographic referencing, for warning services, for legal issues, and to reduce
confusion when two or more tropical cyclones occur at the same time. Through a vote of the World
Meteorological Organization Region IV Subcommittee, Atlantic cyclone names are retired usually when
hurricanes result in substantial damage or death or for other special circumstances. The names assigned
for the period between 2003 and 2009 are shown below.
Names for Atlantic Basin Tropical Cyclones

2003: Ana, Bill, Claudette, Danny, Erika, Fabian, Grace, Henri, Isabel, Juan, Kate, Larry, Mindy,
Nicholas, Odette, Peter, Rose, Sam, Teresa, Victor, Wanda
2004: Alex, Bonnie, Charley, Danielle, Earl, Frances, Gaston, Hermine, Ivan, Jeanne, Karl, Lisa,
Matthew, Nicole, Otto, Paula, Richard, Shary, Tomas, Virginie, Walter
2005: Arlene, Bret, Cindy, Dennis, Emily, Franklin, Gert, Harvey, Irene, Jose, Katrina, Lee, Maria,
Nate, Ophelia, Philippe, Rita, Stan, Tammy, Vince, Wilma
2006: Alberto, Beryl, Chris, Debby, Ernesto, Florence, Gordon, Helene, Isaac, Joyce, Kirk, Leslie,
Michael, Nadine, Oscar, Patty, Rafael, Sandy, Tony, Valerie, William
2007: *Allison, Barry, Chantal, Dean, Erin, Felix, Gabrielle, Humberto, Iris, Jerry, Karen, Lorenzo,
Michelle, Noel, Olga, Pablo, Rebekah, Sebastien, Tanya, Van, Wendy
2008: Arthur, Bertha, Cristobal, Dolly, Edouard, Fay, Gustav, Hanna, Ike, Josephine, Kyle, Laura,
Marco, Nana, Omar, Paloma, Rene, Sally, Teddy, Vicky, Wilfred
2009: Ana, Bill, Claudette, Danny, Erika, Fred, Kate, Larry, Mindy, Nicholas, Odette, Peter, Rose, Sam,
Teresa, Victor, Wanda
For every year, there is a pre-approved list of names for tropical storms and hurricanes. These lists
have been generated by the National Hurricane Center since 1953. At first, the lists consisted of only
female names; however, since 1979, the lists alternate between male and female.
Hurricanes are named alphabetically from the list in chronological order. Thus the first tropical storm or
hurricane of the year has a name that begins with "A" and the second is given the name that begins with
"B." The lists contain names that begin from A to W, but exclude names that begin with a "Q" or "U."
There are six lists that continue to rotate. The lists only change when a hurricane is so devastating that
its name is retired and another name replaces it.
If we're unlucky enough to deplete the year's supply of names we won't, contrary to popular opinion,
simply start using names from next year's list. In that case, the National Hurricane Center will turn to
the Greek alphabet and we'll have Hurricanes Alpha, Beta, Gamma, Delta, etc.
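The naming rules just described are mechanical enough to sketch in a few lines of code. The 2005 list is taken from the table earlier in this section, and the Greek letters follow the fallback convention described above:

```python
# Alphabetical name list for one season (2005, from the table above)
NAMES_2005 = ["Arlene", "Bret", "Cindy", "Dennis", "Emily", "Franklin",
              "Gert", "Harvey", "Irene", "Jose", "Katrina", "Lee",
              "Maria", "Nate", "Ophelia", "Philippe", "Rita", "Stan",
              "Tammy", "Vince", "Wilma"]
GREEK = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta"]

def storm_name(n, names=NAMES_2005):
    """Return the name of the n-th named storm of the season (n counts from 1)."""
    if n <= len(names):
        return names[n - 1]
    # List exhausted: fall back on the Greek alphabet
    return GREEK[n - len(names) - 1]

print(storm_name(11))   # 11th named storm of 2005: Katrina
print(storm_name(22))   # 22nd storm overflows the 21-name list: Alpha
```

The record-breaking 2005 season did in fact exhaust its list, so Tropical Storm Alpha followed Hurricane Wilma.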
2. Hurricane Distribution
The map below shows the main zones of tropical cyclone formation and their local names. Worldwide,
their spatial distribution is not even. Hurricanes are generated between 5° and 20° either side of the
equator. They are the end product of a range of weather systems that can develop in the tropics, all of
which evolve into areas of low pressure into which warm air is drawn.
Tropical cyclones do not occur all year round, but in distinctive seasons when the conditions necessary
for their formation occur. The season of tropical cyclone occurrence is related to the movement of the
ITCZ with the overhead sun between the Tropics. Tropical cyclones occur during the summer season in
each hemisphere.
Certain factors seem important in the formation of tropical cyclones:
A location over seas with surface temperatures in excess of 26°C. This provides the initial heat
energy, the moisture to power intense condensation and convection, and a friction-free surface
to allow the continuous supply of warm, moist air.
A location at least 5° N/S of the equator. This allows for sufficient spin from the Earth's
rotation to trigger the vicious spiral in the centre of the hurricane.
A location on the western side of the oceans where descending air from the subtropical high is
weaker, allowing large scale upward convection to occur.
The presence of upper air high pressure. This ensures that air is sucked into the hurricane
system, causing a rapid uplift, huge volumes of condensation and massive clouds.
3. Scale of Hurricanes
Hurricanes are categorized according to the strength of their winds using the Saffir-Simpson
Hurricane Scale. A Category 1 storm has the lowest wind speeds, while a Category 5 hurricane has the
strongest. These are relative terms, because lower category storms can sometimes inflict greater
damage than higher category storms, depending on where they strike and the particular hazards they
bring. In fact, tropical storms can also produce significant damage and loss of life, mainly due to
flooding.
Scale   Wind Speed      Pressure       Storm Surge
1       118-153 kph     >980 mb        1.2-1.6 m
2       154-177 kph     965-979 mb     1.7-2.5 m
3       178-209 kph     945-964 mb     2.6-3.8 m
4       210-249 kph     920-944 mb     3.9-5.5 m
5       250+ kph        <920 mb        >5.5 m

Damage Potential
Category 1 (Minimal): Damage to vegetation and poorly anchored mobile homes. Some low-lying coasts
flooded. Solid buildings and structures unlikely to be damaged.
Category 2 (Moderate): Trees stripped of foliage and some trees blown down. Major damage to mobile
homes. Damage to some roofing materials. Coastal roads and escape routes flooded 2-4 hours before
the cyclone centre arrives. Piers damaged and small unprotected craft torn loose. Some evacuation of
coastal areas is necessary.
Category 3 (Extensive): Foliage stripped from trees and many blown down. Great damage to roofing
materials, doors and windows. Some small buildings structurally damaged. Large structures may be
damaged by floating debris. Serious coastal flooding and escape routes cut off 3-5 hours before the
cyclone centre arrives. Evacuation of coastal residents for several blocks inland may be necessary.
Category 4 (Extreme): Trees and signs all blown down. Extensive damage to roofing, doors and windows.
Many roofs of smaller buildings ripped off and mobile homes destroyed. Extensive damage to lower
floors of buildings near the coast. Evacuation of areas within 500m of the coast, and of low-lying areas
up to 10km inland, may be necessary. Major erosion of beaches.
Category 5 (Catastrophic): Complete roof failure of many residential and industrial buildings. Major
damage to lower floors of all structures lower than 3m above sea level. Evacuation of all residential
areas on low ground within 16-24km of the coast likely.
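The wind-speed bands above lend themselves to a simple classification routine; a minimal sketch using the kph thresholds from the table:

```python
def saffir_simpson(wind_kph):
    """Saffir-Simpson category (1-5) from sustained wind speed in km/h,
    using the bands in the table above; 0 means below hurricane strength."""
    for threshold, category in [(250, 5), (210, 4), (178, 3), (154, 2), (118, 1)]:
        if wind_kph >= threshold:
            return category
    return 0

# The two storms compared later in this section
print(saffir_simpson(232))   # Cyclone Gorky, 1991
print(saffir_simpson(264))   # Hurricane Andrew, 1992
```

On these windspeeds Gorky falls in Category 4 and Andrew in Category 5, though as the text notes, category alone is a poor guide to the damage a storm will actually inflict.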
4. Effects of Hurricanes
Hurricanes have major impacts both on people and on the physical environment. These effects can be
split up into the following:
Physical Environment
High winds, which destroy trees.
Tidal surges, which cause flooding of low land near the coast and ecosystem damage in the oceans.
Huge rainfall totals, causing flooding and landslides which remove vegetation and reduce slope angles.
Built Environment
Loss of communications (roads, elevated highways, railways, bridges, electricity lines).
Loss of homes
Loss of industrial buildings/facilities
Human Environment
Death and injury
Destruction of homes causing homelessness, refugees.
Loss of factories/industry causing a loss of livelihood and unemployment.
Loss of communications hindering rescue/emergency services and rebuilding/rehabilitation.
5. Factors Affecting Damage by Hurricanes
The effect of a hurricane on a community depends on a number of factors, such as physical factors,
economic factors and political factors:
Physical Factors
The intensity of the hurricane (measured from 1 to 5 on the Saffir Simpson Hurricane scale)
will be a major influence.
The hurricane intensity will be modified by the distance to the path of the hurricane, known as
the storm corridor. Destruction is significantly higher along the storm corridor.
There is also a relationship between distance from the sea and the amount of damage because
the hurricane dies as it moves inland.
Whether a settlement lies on the right or left of a hurricane's path can influence the
destruction caused. In the Northern Hemisphere the winds circulate anticlockwise around the
centre, so the hurricane's travel speed (perhaps 48 kph) is added to the windspeeds on the right
of its path but subtracted from those on its left. This can result in a 96 kph difference in
windspeed depending on which side of the storm centre you lie.
The travel speed of a hurricane also determines how long the hurricane takes to leave a
location. Hurricanes usually nudge their vicious wind circulations along at a leisurely 6-50 kph.
Slower moving hurricane systems can cause more damage because the destructive winds take
longer to move on.
High relief will exaggerate already high hurricane rainfall levels. Flooding arising from these
high rainfalls is often a major component of hurricane deaths and damage. Landslides are equally
dangerous in areas of high relief. On the other hand, low relief will make a region more vulnerable
to storm surges. Coastal flooding can be the main killer in a hurricane. The storm surge and
subsequent flooding is the greatest hazard to the people of Bangladesh, since much of the
country is low-lying delta and floodplain, which is densely populated agricultural land. Most of the
coastal area is below 3m, and large numbers of people live on unstable sandbanks in the deltas.
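The right/left asymmetry of hurricane winds is simple addition, as this sketch shows. The 180 kph rotational speed is an invented example, and the 48 kph forward speed is chosen because it yields the 96 kph contrast quoted in the text:

```python
def side_winds(rotational_kph, travel_kph):
    # In the Northern Hemisphere the anticlockwise circulation means the
    # storm's forward motion adds to the winds on the right of the track
    # and subtracts from those on the left
    right = rotational_kph + travel_kph
    left = rotational_kph - travel_kph
    return right, left

right, left = side_winds(180.0, 48.0)
print(f"Right of track: {right} kph, left: {left} kph, difference: {right - left} kph")
```

So a settlement just to the right of the track of this hypothetical storm would face Category 4 winds while one just to the left would face only Category 2 winds.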
An illustration of how physical factors can make the damage caused by a hurricane worse can be seen in
the case of Hurricane Mitch in 1998. Hurricanes, therefore, represent a mixture of hazards. Once the
hurricane strikes, damage, death and destruction may depend more on economic and political factors
than anything else!
Economic Factors
Economically the patterns of death and damage are related to the stage of development of the affected
nation. Poorer countries suffer because building codes, warning systems, defences, emergency service
and communications infrastructure may be inadequate, resulting in high death tolls. Wealthier countries
stand a better chance of evacuating people in time, but have more to lose in simple material terms, and
therefore suffer greater economic losses!
Event                        Date           Windspeed   Death Toll   Damage (US$)
Cyclone Gorky (Bangladesh)   May 1991       232 kph     131,000      1.7 billion
Hurricane Andrew (USA)       August 1992    264 kph     60           20 billion
The death tolls illustrate the huge differences between the vulnerability of American and Bangladeshi
citizens, but the damage totals can be misleading. The raw figures suggest that the USA suffered more
damage, but this is simply because the buildings, contents and infrastructure affected had a higher
monetary value.
However, money is not a reliable measure since the loss of a home has a big impact on the family
whatever it cost. Indeed the loss of an American home worth $150,000 (and covered by insurance) may
be less significant than a tin and wood shack on the Ganges delta that represents years of irreplaceable
(and uninsured) savings.
An illustration of how economic factors can make the damage caused by a hurricane worse can be seen in
the case of Hurricane Mitch in 1998.
Political Factors
Political factors influence the underlying causes of poverty and vulnerability, but it is not simply national
politics and priorities which are to blame. International relationships are also responsible, as shown by
Hurricane Gilbert in Jamaica in 1988. Prior to the hurricane, Jamaica was already in debt, partly as a
result of previous hurricane damage. The high interest repayments on the debts saw the Jamaican
Government attempting to improve the economy by cutting public spending and reducing inflation (by
raising interest rates) in order to lower prices, encourage people to spend and make industry more
competitive. The increased interest rates reduced profits in the construction industry, and houses were
built cheaply and shoddily. Cutbacks in health budgets reduced nutritional levels in a country where
more than 30% of the population live in poverty, more than 50% of women of childbearing age are
anaemic, and 50,000 children under five are malnourished.
The combination of declining building standards and decreasing healthcare served to increase
Jamaica's vulnerability. Hurricane Gilbert came along and devastated the island in 1988, causing huge
losses to Jamaica's economy, estimated at some US $7 billion. This further increased Jamaica's debt,
so the Government is now looking at the possibility of mining peat from Jamaica's coastal wetlands to
provide a cheap fuel source. This will help the balance of payments and, economically, makes sense.
Unfortunately, it would also remove the first line of defence against hurricane surges. To pay for
repairs from the last hurricane it seems Jamaica has to increase its vulnerability to the next - a very
vicious "vicious circle". If the burden of Third World debt could be reduced, LEDCs could increase their
"disaster resistance" by focusing investment on development schemes aimed at improving the welfare of
the rural poor.
The politics of war can also have an effect. In the case of Nicaragua in 1988 the long guerilla war and
US sanctions increased the impact of hurricanes, reducing the country's ability to cope. An illustration
of how political factors can make the damage caused by a hurricane worse can be seen in the case of
Hurricane Mitch in 1998.
6. Hurricane Management
Although hurricanes are neither the biggest nor the most violent storms experienced on the surface of
the Earth, they combine these two characteristics to become amongst the most destructive. During the
peak hurricane season in the northern hemisphere (1 June to 30 November) hurricanes pose a major
threat to human life, agriculture and assets both at sea and on land. The area of the Atlantic at
greatest risk from damage caused by hurricanes is between 10 and 30 degrees latitude. However,
degraded storms may travel back across the Atlantic and cause serious wind damage or flooding in
Europe. Since hurricanes have such a dramatic impact on human life, it is not surprising that people have
invested time and money in trying to predict their development, path and intensity. Prior to 1898, when
the first warning system was established, hurricanes arrived on land unannounced, resulting in enormous
damage and the tragic loss of life. Since then, methods of tracking and prediction have improved
enormously, notably with the introduction of air reconnaissance flights in 1944, the use of radar
technology and satellite imagery in 1960, and the introduction of computer-assisted modelling
techniques.
Today, one of the most elaborate and comprehensive hurricane warning services is funded by the US
government and located in Miami, Florida. At the National Hurricane Centre five hurricane specialists
are employed to monitor the path and development of hurricanes in the Atlantic and Eastern Pacific
Oceans. The specialists work shifts during the peak hurricane season to ensure 24 hour coverage of any
storm activity.
Methods of hurricane tracking and prediction
Satellite imagery forms the main source of information used by the hurricane specialists. Geostationary
satellites (predominantly GOES-7 and METEOSAT 3 and 4) supply images to the Centre at half hour
intervals. These are initially interpreted by satellite analysis teams, including the Satellite Analysis
Branch (SAB) located at the National Meteorological Center (NMC) in Washington, and the Tropical
Satellite Analysis Forecasting (TSAF) group located in Miami. The teams analyse images in the visible
and infra-red wavebands of the electromagnetic spectrum to produce an estimate of each individual
storm's location and intensity. Any revolving cloud activity located between 0° and 140° W is also
brought to the attention of the specialist on duty, as such systems often represent embryo tropical
cyclones.
The independent analysis teams may produce different estimates of the location of the storm centre,
especially if the centre is poorly defined.
The hurricane specialist therefore has to co-ordinate the teams' information sets with a wide variety of
surface data in order to produce the most accurate estimate for the location of the storm.
This is particularly important because the computer models rely heavily on the accuracy of the location
estimate to produce reliable forecasts.
Satellite image used by the National Hurricane Centre
to monitor Atlantic hurricanes
The sources of surface information include measurements of
precipitation, wind and pressure characteristics available
from land and sea-based permanent recording centres, and
naval and commercial shipping in the immediate vicinity of
the storm. The availability of such measurements depends
heavily on the location and intensity of the storm,
particularly if it presents a threat to shipping. If the
surface data sources are limited and the storm is likely to
present a threat to land (increasing the need for reliable
track predictions) the specialist on duty may authorise a
reconnaissance flight through the centre of the storm.
The role of reconnaissance flights
In a reconnaissance flight a WC 130 aircraft equipped to make accurate measurements of wind speed
and direction, temperature, dew point and pressure is flown at 24,000 feet (approximately 7.3 km)
through the storm several times. This procedure, though safe, requires a team of six highly qualified
experts and a great deal of initial capital input. In addition it has been estimated that each hour of a reconnaissance flight costs around US$2,500, and an estimated 10-12 flying hours are required per mission. For this reason, a reconnaissance flight can only be authorised when storms present a major threat to US assets or Caribbean countries, justifying the benefit of accurate information on the nature and location of a storm. The higher number of reconnaissance flights flown through Atlantic storms is one of the reasons why they are more accurately predicted than Pacific storms.
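The per-mission cost quoted above is a single line of arithmetic; a minimal sketch using the figures from the text ($2,500 per flying hour, 10-12 hours per mission):

```python
HOURLY_COST = 2_500  # US dollars per flying hour (figure from the text)

def mission_cost(hours: float) -> float:
    """Return the estimated cost of a reconnaissance mission of the given length."""
    return HOURLY_COST * hours

low = mission_cost(10)   # shortest quoted mission
high = mission_cost(12)  # longest quoted mission
print(f"Estimated mission cost: ${low:,.0f} - ${high:,.0f}")
```

So each mission costs roughly $25,000-$30,000, which is why flights are reserved for storms threatening US or Caribbean interests.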
The role of computer technology
All the available data on a particular storm are entered on to a computer and can be compiled as a single
image. The specialist can manipulate the image by enhancing the scale and definition, or adding colour,
degrees of latitude and longitude, and coastlines.
Computer generated maps allow a storm's path to be tracked and predicted
Using all the data, an estimate of the initial position of the storm (the position at the time of the final satellite image) and the actual position of the storm (the location at the time of broadcast) is made. These data are then entered into a database and transmitted to the National Meteorological Center (NMC) in Washington. Here meteorological data from stations all over the world are compiled and the information is used to run climate simulation models on the powerful 'Cray' computer. Approximately 12 models are currently used, including statistical, dynamical and statistical-dynamical models.
Statistical models such as CLIPER are based on the idea that the track of a storm under a given set of
meteorological conditions is repeatable. Thus by comparing the location of the storm and the
surrounding atmospheric conditions with the actual tracks of storm activity recorded over the last 50
years an estimate of the forecast track is provided. The reliability of this model is not only affected by
statistical limitations but also by the presumed accuracy of storm activity records in the past. However,
statistical models do have the advantage of requiring only limited computer power to run, and producing
results within seconds.
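CLIPER itself is a regression scheme built on climatology and persistence, but the underlying idea described above - that storms under similar conditions follow similar tracks - can be sketched as a toy analogue forecast. All of the "historical" records below are invented for illustration:

```python
import math

# Hypothetical historical records: (lat, lon, dlat_24h, dlon_24h),
# i.e. where a past storm was and how far it moved in the next 24 hours.
HISTORICAL = [
    (15.0, -45.0, 1.0, -3.0),
    (16.0, -47.0, 1.2, -2.8),
    (25.0, -70.0, 2.5, -1.0),
    (14.5, -44.0, 0.9, -3.2),
]

def analogue_forecast(lat, lon, k=3):
    """Average the 24h displacement of the k nearest historical storms."""
    ranked = sorted(HISTORICAL, key=lambda r: math.hypot(r[0] - lat, r[1] - lon))
    nearest = ranked[:k]
    dlat = sum(r[2] for r in nearest) / k
    dlon = sum(r[3] for r in nearest) / k
    return lat + dlat, lon + dlon

print(analogue_forecast(15.2, -45.5))
```

Such a scheme needs almost no computing power, which is why statistical forecasts are available within seconds; its accuracy, as the text notes, is capped by the quality of the historical track records.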
Dynamical models are based on far more detailed and complex ideas about the climate of the Earth.
These mathematical models assume the Earth is a rotating sphere surrounded by a gas (representing the
atmosphere). Data are collected from all over the world and used to predict global atmospheric
conditions. To predict the development and track of tropical cyclones a spiralling vortex is inserted into
a model of predicted atmospheric conditions - rather like a spinning cork in a basin. Dynamical models are
considerably superior to statistical models but they too have their limitations. For example, the process
of data collection takes approximately 2 hours, and a further hour is required to run the model.
The predictions are therefore not received by the specialists in time for the production of the
'immediate' package and have to be used in the following shift forecast, by which time the information
is comparatively out of date. As with the statistical models, the forecast is only as accurate as the data
input, and a regular compatible supply of data is required from Third as well as First World countries.
The accuracy of the forecast therefore varies on a daily basis and a simple check of the reliability of
the forecast can be made by comparing the predicted atmospheric conditions with the actual
atmospheric conditions of the previous day.
Dynamical-statistical models are the most complex and potentially the most accurate models,
incorporating the principles of both the above types of model. However they are still in the early stages
of development.
The human element
All the separate model forecasts described above are collated on a single map. However, as a result of
the differing natures of the models and their limitations, the forecasts produced by each are often
different, and sometimes contradictory. The role of the specialist is to translate all the different
objective forecasts into a single subjective forecast. To do this he or she uses statistics which indicate
the reliability of each model in previous predictions, coupled with personal experience to weight each
particular model. The track and intensity forecast issued by the previous watch and the storm's
proximity to land must also be taken into consideration before predictions can be finalised.
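The weighting process described above can be sketched as a simple inverse-error consensus. The model names, forecast positions and error statistics below are all invented for illustration:

```python
# Each model's track forecast: model -> (predicted lat, predicted lon)
forecasts = {
    "statistical": (26.0, -80.0),
    "dynamical":   (27.0, -82.0),
    "stat-dyn":    (26.5, -81.0),
}
# Historical mean 72h track error for each model, in km (invented figures):
mean_error_km = {
    "statistical": 400.0,
    "dynamical":   250.0,
    "stat-dyn":    300.0,
}

def weighted_consensus(forecasts, errors):
    """Weight each forecast by the inverse of its historical mean error."""
    weights = {m: 1.0 / errors[m] for m in forecasts}
    total = sum(weights.values())
    lat = sum(weights[m] * forecasts[m][0] for m in forecasts) / total
    lon = sum(weights[m] * forecasts[m][1] for m in forecasts) / total
    return lat, lon

print(weighted_consensus(forecasts, mean_error_km))
```

The real specialist's judgement is of course subjective rather than a fixed formula, folding in personal experience, the previous watch's forecast and the storm's proximity to land.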
When a tropical cyclone is threatening land the specialist must decide which section of the
coastline to place under watch or warning. This is done in conjunction with the predicted track forecast
and the local authorities who can provide advice about suitable breaks in the watch or warning zones.
The specialist must weigh up the advantages of over-preparing the population against the disadvantages
caused by under-preparing them. It may seem obvious that the specialist should err on the side of
caution - after all, lives and considerable property damage are at stake. However, the considerable
personal inconvenience and financial cost of evacuation must be taken into account!
For example, it has been estimated that the financial cost of evacuating a 500 km stretch of the US
coastline is approximately US$50 million (measured in terms of lost business and tourism, coupled with
the expenses of property protection and evacuation procedures). It is also vital to prevent repeated
unnecessary warnings lulling the population into a false sense of security and generating complacency.
The hurricane specialist must also define each storm and classify its intensity based on the Saffir-Simpson scale. Although satellite images can be used to estimate the intensity of the storm using the
shape and patterns of cloud formation, and the various surface recording stations provide accurate
records of wind speed at a particular place and time, the conditions within a storm are, spatially and
temporally, highly variable. This means that the probability of actually measuring the highest wind speed
of a particular storm is remote. The difference therefore, between a 'very strong tropical storm' and a
'weak' hurricane is not only nominal but subjective, depending on the data available and the specialist on
duty. Whilst this has no significant direct impact on the public it does affect statistical records and
hence the statistical models.
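The nominal boundary between a 'very strong tropical storm' and a 'weak' hurricane described above can be seen in a simple classifier. The thresholds used here are the current NHC Saffir-Simpson wind bands (1-minute sustained winds, mph), which may differ slightly from those in use when the source material was written:

```python
def saffir_simpson(wind_mph: float) -> str:
    """Classify a storm by 1-minute sustained wind speed (current NHC bands)."""
    if wind_mph < 39:  return "tropical depression"
    if wind_mph < 74:  return "tropical storm"
    if wind_mph < 96:  return "category 1"
    if wind_mph < 111: return "category 2"
    if wind_mph < 130: return "category 3"
    if wind_mph < 157: return "category 4"
    return "category 5"

# 73 mph and 74 mph sit either side of a single threshold - the 'nominal'
# distinction the text describes between storm and hurricane.
print(saffir_simpson(73), "|", saffir_simpson(74))
```

Since the true peak wind is rarely sampled, the measured value (and hence the assigned category) depends on where and when observations happened to be made, which is exactly why the classification feeds uncertainty back into the statistical records.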
Hurricane prediction: a success story?
Methods of hurricane prediction and tracking are constantly updated and improved. An important part of
the specialists' work is the detailed collection of data and model verification (checking the forecast
accuracy) for each individual storm. Verifications of some of the models are beginning to indicate that
they are producing superior forecasts to the subjective analyses. There is therefore a realistic hope of
producing accurate 72 hour forecasts in the near future. (Due to the chaotic nature of the climatic system, accurate forecasts beyond 72 hours are unlikely.)
In terms of saving life the hurricane prediction service is an undoubted success. Since 1898 the death
toll has been continually and dramatically reduced due to the increased accuracy and efficiency of the
warning systems, and better education and communication facilities. The material damage caused by
hurricane activity, however, has dramatically increased in this period. This is predominantly because of
increased wealth and the concentration of material goods and people in zones at risk from hurricane
activity. A 1992 survey pointed out that 80-90% of the US coastal population had never experienced a
category 3 hurricane. This bred a false sense of security as the population did not comprehend, or
ignored, the risk of hurricane activity and took inadequate precautions and unnecessary risks.
The enormous damage caused by Hurricane Andrew in 1992 was largely a result of this complacency. As
the population density in coastal regions at risk from hurricane activity increases and the evacuation
systems are put under stress, there is a danger that in the future, death tolls from hurricanes may
again rise. Perhaps the challenge for the hurricane prediction service is to co-ordinate the increased
accuracy of prediction techniques with education programmes and improved safety and protection
measures to ensure the continued protection of life and property.
Hurricane Mitch, Central America (LDCs), 1998
On 22nd October 1998 a tropical storm formed in the Atlantic. Within 4 days the storm had grown to a
category 5 hurricane (Saffir Simpson scale) gusting at over 200 mph (320 kph). It remained a category
5 hurricane for 33 hours then the windspeeds began to fall as Mitch drifted towards Honduras. But wind
was not the problem with this monster. Mitch made landfall on the Honduras coast on 30th October.
Normally when a hurricane hits land it begins to die. The warm, moist oceanic air which drives the
hurricane's energy is replaced by dry continental air. Lack of moisture means lack of condensation - so
no more release of latent heat to drive the hurricane. Unfortunately a whole host of background causes had increased the vulnerability of this area, and so when the trigger event of Hurricane Mitch arrived, all hell broke loose. A disaster waiting to happen!
Background Causes
1. Economic
This region has debts of $4 billion owed to the West. Every day Nicaragua and Honduras spend a total of $2 million on repayments to creditors.
Poverty has increased in both absolute and relative terms. Many towns had no storm drains.
2. Political
In a recent survey measuring perceptions of corruption among private business leaders in 89 countries around the world, Honduras was ranked third.
The natural reservoir of Laguna de Pescado on a tributary of the River Choluteca was formed
some years ago after a landslip blocked the river. The authorities never got round to removing it.
Communities were allowed to build on river banks and steep, unstable hill slopes.
3. Social
Tropical rainforest is disappearing from the Caribbean coast at a rate of 80,000 ha a year,
caused mostly by farmers burning trees to create farmland.
Towns grew rapidly as people migrated from rural areas for jobs.
Many thriving towns were situated on fertile farming area near the west coast of Honduras and
Nicaragua which was formed from volcanic ash. These volcanic soils are easily washed away.
4. Environmental
Many towns were situated in narrow, steep-sided mountain valleys.
Soils were saturated by weeks of wet weather.
Trigger Causes
Hurricane Mitch hit Honduras on 29th October 1998 as a category 5 hurricane on the Saffir-Simpson scale. It was the fourth fiercest in the Caribbean this century.
When the storm reached Central America it stalled for 2 days over the mountains of central
Honduras.
The mountains forced air to rise to 2,000m, cool and condense, then dump huge amounts of
moisture picked up from the sea.
Over 2 days about 40 cubic km of water fell on Honduras, and neighbouring areas of Nicaragua
and El Salvador.
Hurricane Katrina (MDC), 2005: Video Notes
Hurricane Pam: One year before Katrina, the fictional 'Hurricane Pam' exercise (Baton Rouge, Louisiana, July 2004) started doomsday-scenario planning for the next major hurricane. New Orleans was identified as being most at risk! The simulated impacts were:
New Orleans levees damaged by floods
61 000 dead
380 000 injured and sick
½ million people homeless and ½ million buildings damaged
1 million people evacuated
Washington takes charge of the relief effort; local, state and federal government now share responsibility.
Occurrence:
The date, place and time of landfall were predicted by the National Hurricane Center, Miami. However, it was still one of the most deadly hurricanes of modern times!
August 24th 2005: Tropical thunderstorms in the Atlantic/central Caribbean with 38 mph winds are classified as a tropical storm. In Miami the National Hurricane Center predicts that within 36 hours hurricane conditions will affect south Florida. Hurricane Katrina was eventually ranked as the 6th strongest hurricane of all time. 'Hurricane Katrina in 2005 was the largest natural disaster in the history of the United States. Preliminary damage estimates were well in excess of $100 billion, eclipsing many times the damage wrought by Hurricane Andrew in 1992.'
Map of Central New Orleans
Thursday Aug 25th 2005
The National Hurricane Center spots a mass of tropical thunderstorms in the Atlantic with a counter-clockwise rotation – 33 mph – a tropical storm.
Miami – National hurricane centre – 36 hours hurricane conditions predicted for S. Florida. 74mph – a
category 1.
06:30 hits Florida Shore: $460 million damage, 14 dead, High winds – slow moving 8mph (weak but slow ½
the speed of a normal hurricane)
Lead time is given in order to prepare the response: truckloads of material are gathered – bottled water, Pop-Tarts, flashlights. Walmart (Bentonville, Arkansas) sets up an emergency response team.
Advisory estimated within 36 hours
Shelters and feeding units in the west and Gulf coast are set up by the Red Cross. Every response starts from the bottom up. FEMA (Federal Emergency Management Agency) becomes involved.
Friday 26th Aug
National Guard deployed
Oil Companies evacuated offshore
11:30 –Katrina Strengthens to a Cat 2 soon to be a 3 within 24hrs
Target of Katrina Florida Pan Handle of Louisiana (New Orleans).
New Orleans is built below sea level, crecsent shaped, gulf of
mexico 100 miles away, Mississippi runs through it ,Lake
Pontchartrain to the north. Some areas 6ft lower such as the 9th Ward, protected by earthen levees
and flood walls. Some walls and levees are sinking and in need of desperate repair, usually mainteained by
the US army corps.
5pm: NW of the Florida Keys. The target is west of the Florida Panhandle; New Orleans will be hit within 72 hours.
People don’t fear hurricanes here as it is a common occurrence.
11pm: Buras, Louisiana, 72 miles south of New Orleans, is predicted to be the place of landfall – a prediction later proved accurate.
Saturday 27th Aug
FEMA wants to arrange distribution points with the federal government. Katrina loses energy overland then re-energises over the Gulf of Mexico; sea surface temperatures greater than 26°C encourage this!
Category 3 - 115mph
It propels a storm surge landwards. EVACUATION starts when the storm is 30 hours away; the mayor warns low-lying areas to evacuate!
FEMA was downgraded after 9/11, taken out of the White House's remit and placed within the Department of Homeland Security. Many later blame this for its slow response.
12 ft waves – New Orleans flood gates close, including those on the Industrial Canal, the 17th St Canal and the London Avenue Canal. They are only protected by walls 13-18 ft high.
Personal responses begin, people secure properties, board up
windows and stockpile food and water.
Saturday night in the French Quarter the bars are full – the
ultimate fatalistic approach to the hazard.
Sunday 28th Aug
Worst-case scenario – Katrina becomes a Cat 4, then a Cat 5 by 7am; it will hit the shore within 24 hours and is 125 miles wide.
The Superdome, a 70 000 seat stadium and home of the New Orleans Saints football team, acts as a refuge centre. At 8am the Superdome takes in people. A mandatory evacuation has still not been ordered!
9.20: Bush calls Governor Blanco to discuss the plan for evacuation. A mandatory evacuation is ordered now, with only 20 hours to go – too late!
6pm a curfew is imposed on people in the city of New Orleans.
Tens of thousands are not moving. People are requested to bring food and water to the Superdome to last 5 days; the state insists on no drugs, guns or alcohol. 2.5 million bottles of water and one million MREs (Meals Ready to Eat) arrive. 1 million people are now evacuated. The wait for buses to the Superdome takes hours.
Winds now 200 mph. Traffic heading inland snarls up due to high volumes. Gas stations close and run out of fuel; grocery stores run out of supplies. Some people decide to stay in their homes and stick it out!
Thousands of people – approximately 20% of the population – have no cars to move out. These same people don't have money either, as the storm hit at the end of the month and their benefit cheques had already run out! It is the poorest, often the black population, who suffer the most in this type of disaster. They therefore have to be moved by bus, a slow process. The poverty level in New Orleans is 23%, double the national average, and the murder rate in many wards is also higher than average. The deprived 9th Ward is typical of this poverty and is 4 ft below sea level.
President Bush, still on vacation, now declares a state of national emergency. State search and rescue teams (262 people) are delayed due to FEMA bureaucracy at its Washington DC headquarters.
Mon 29th Aug
Katrina (cat 3 or 4) weakens and its leading edge hits towns on the Gulf coast.
4am winds drive a 14-17 ft storm surge inland.
5:10 Electricity lost including Superdome – only using back up generator. 100 000 without power
6:10: Katrina is 60 miles east of New Orleans. The storm surge pushes up the Mississippi River at the 'funnel' where the Intracoastal Waterway meets the Industrial Canal; to the east of this are the Lower 9th Ward and St Bernard Parish. The Superdome roof is damaged as winds rip away 12 ft sections. Levees are overtopped and the phone system is down across the city.
7:45: Lower 9th Ward levees erode along the Industrial Canal! New Orleans East is also under 12 ft of water. Levees on Lake Pontchartrain also burst!
10:00am: Katrina moves NE over Bay St Louis, Waveland, Pass Christian and Gulfport on the state border (Louisiana/Mississippi).
Biloxi, Mississippi
Mobile Alabama –10 ft under water
Jackson Barracks floods; people move to higher floors. FEMA has no emergency equipment as it is an agency that relies on private contractors, which slows down the emergency response. General Honore leads the military response; the US military and private contractors are involved in the relief effort. TV, phone, radio and satellite are out; the local network is also down.
Lake Pontchartrain is overtopped by the storm surge, which overtops the levees. The London Avenue and 17th St Canals fill with water; the west side of the London Avenue Canal now fails and the east side of the 17th St Canal fails, covering the western parish with 6-9 ft of water. The Lower 9th Ward is now a lake!
Entertainer Fats Domino is rescued.
1pm: the media report that downtown New Orleans 'missed the bullet but took a deadly punch'. This was the message Washington DC was receiving – how wrong! The Grand Casino at Gulfport was moved 150 yards.
Thursday 1st Sept
80 percent of New Orleans damaged and under water
200 000 homes destroyed
249 police officers left their posts!
Many criticisms now voiced! No system in place early enough.
20 000 people in superdome, toilets back up and they are left with no food, no water, no medicine
1 million evacuated. FEMA did not know where the shelter points were and did not know who needed the help!
Widespread looting in all areas – the humanitarian effort is delayed as order is restored first.
Bureaucratic processes also stop rescue teams from beginning work for 2 days, even though they are ready to go, because paperwork needed to be signed.
The Army Corps tried to block up the failed levees with sandbags; however, this failed. The pumps also failed to drain the city.
Reports of mass murder and gang rapes turn out to be incorrect and exaggerated; the media also over-reported some statistics. Still, why is there no state control? The Governor of Louisiana, Kathleen Blanco, does not want to appear politically weak!
Local responses include a local bus company that takes 6 buses full of supplies to the disaster zone.
The generators that were requested do not arrive, so the city cannot be pumped out, delaying the relief effort. Because the generators were not working, the Superdome sewage could not be pumped out either, causing a further medical disaster and the spread of disease. The levees that were designed to save the city actually keep the water in. Michael Brown, the FEMA director, quits after Hurricane Katrina, having failed to recognise Louisiana as a problem area.
After the event President Bush admits that Federal, State and Local Governments were not
prepared and he takes full responsibility for the shortcomings.
The broken levees were repaired by engineers and the flood water in the streets of New Orleans took
several months to drain away. The broken levees and consequent flooding were largely responsible for
most of the deaths in New Orleans. One of the first challenges in the aftermath of the flooding was to
repair the broken levees. Vast quantities of materials, such as sandbags, were airlifted in by the army
and air force and the levees were eventually repaired and strengthened. The reopening of New Orleans
was delayed due to the landfall of Hurricane Rita.
Although the USA is one of the wealthiest developed countries in the world, Katrina highlighted that when a disaster is large enough, even very developed countries struggle to cope.
Hurricane Rita hits just a few weeks after Katrina and this time the government at all levels is
much better prepared.
Hurricane Rita was the fourth-most intense Atlantic hurricane ever recorded and the most intense
tropical cyclone ever observed in the Gulf of Mexico. Rita caused $11.3 billion in damage on the U.S. Gulf
Coast in September 2005.[1] Rita was the seventeenth named storm, tenth hurricane, fifth major
hurricane, and third Category 5 hurricane of the historic 2005 Atlantic hurricane season.
Rita made landfall on September 24 between Sabine Pass, Texas, and Johnsons Bayou, Louisiana, as a
Category 3 hurricane on the Saffir-Simpson Hurricane Scale. It continued on through parts of
southeast Texas. The storm surge caused extensive damage along the Louisiana and extreme
southeastern Texas coasts and destroyed some coastal communities. The storm killed seven people
directly; many others died in evacuations and from indirect effects.
Hurricane Katrina tracked over the Gulf of Mexico and hit New Orleans, a coastal city with huge areas below sea level.
Summary of Impacts:
1,500 deaths in the states of Louisiana, Mississippi and Florida.
Costs of about $300 billion.
Thousands of homes and businesses destroyed.
Criminal gangs roamed the streets, looting homes and businesses and committing other crimes.
Thousands of jobs lost and millions of dollars in lost tax incomes.
Agricultural production was damaged by tornadoes and flooding. Cotton and sugar-cane crops
were flattened.
Three million people were left without electricity for over a week.
Tourism centres were badly affected.
A significant part of the USA oil refining capacity was disrupted after the storm due to flooded
refineries and broken pipelines, and several oil rigs in the Gulf were damaged.
Major highways were disrupted and some major road bridges were destroyed.
Many people have moved to live in other parts of the USA and many may never return to their
original homes.
7. Global Climate Change
'The issue of climate change is one that we ignore at our own peril. There may still be disputes about exactly how much we're contributing to the warming of the Earth's atmosphere and how much is naturally occurring, but what we can be scientifically certain of is that our continued use of fossil fuels is pushing us to a point of no return. And unless we free ourselves from a dependence on these fossil fuels and chart a new course on energy in this country, we are condemning future generations to global catastrophe.' Barack Obama, 2009.
Description of Climate Change since the Last Ice Age
The global climate has varied considerably over geological time. There have been colder periods known as glacials and warmer periods known as interglacials (see diagram right).
The climate of Britain has varied greatly over
the last 20 000 years. There has been a
gradual warming since the end of the last
Pleistocene Ice Age. The Pleistocene was an
epoch of cool glacial and warmer interglacial
periods which began about 2 million years ago
and ended in the British Isles about 11 500
years ago. Temperatures continued to rise after the localised glacial re-advance and cooler conditions of the Younger Dryas (13 000-11 500 years BP), warming to reach the climatic optimum 6 000 years ago in the Atlantic Period; since then there has been a gradual cooling, leading to the "Little Ice Age" (not a real ice age, by the way) between about AD 1500 and 1700. In the last 150 years, however, there has been a rapid warming associated with the human-enhanced greenhouse effect.
The climate graph below shows how climate in Britain has changed over the last 15 000 years since the
last Ice Age.
The current view held by the majority of climatologists is that we are currently in a warmer period known as an interglacial. One of the key questions climatologists hope to answer is: will there be a new glacial period soon (geologically speaking), or will temperatures continue to rise in the near future?
The phenomenon of observed warming over the last 150 years, referred to as recent global warming, appears to be linked to anthropogenic (man-made) increases in carbon dioxide levels associated with the burning of so-called fossil fuels, namely coal, oil and natural gas, which release carbon dioxide that has been stored and locked away in the underlying strata for up to 350 million years. Indeed, evidence obtained by a team of North American scientists calling themselves the Greenland Ice Sheet Project 2 (GISP2) and their European counterparts the Greenland Ice Core Project (GRIP) reveals that temperature does indeed appear to increase when atmospheric CO2 is high. Ice cores of up to 3 km in depth taken from the Greenland ice sheet have allowed trapped bubbles of air to give scientists a proxy (estimation) of the climate. Firstly, the composition of stable isotopes of oxygen has been analysed using mass spectrometry, and the ratio of O16 to O18 calculated and compared to that of Standard Mean Ocean Water (SMOW), which is water from the deep oceans, uniform in composition, to which all other ocean water samples are compared. You can think of it as comparing the sample to a kind of 'chemical zero', so it can be determined whether a sample is anomalously enriched or deficient in the heavier O18. If the air sample from the ice contains a high ratio of O16 to O18 then this indicates a period of cooler conditions (see later in the notes for how this works). CO2 concentrations have also been measured from the air bubbles, in parts per million, using mass spectrometry, and high CO2 concentrations appear to match higher temperatures, indicating that the two are linked (see graph below).
Temperature calculated from ice core data plotted with CO2 concentration for the last 150 thousand years
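The comparison against SMOW described above is conventionally expressed as a δ18O value in parts per thousand (per mil), using the heavy-to-light 18O/16O ratio rather than its inverse. A minimal sketch of the calculation (the sample ratio below is invented for illustration):

```python
R_SMOW = 0.0020052  # 18O/16O ratio of Standard Mean Ocean Water (the 'chemical zero')

def delta_18O(r_sample: float) -> float:
    """delta-18O in per mil: negative = depleted in heavy 18O, i.e. colder-climate ice."""
    return (r_sample / R_SMOW - 1.0) * 1000.0

# An ice sample depleted in 18O relative to SMOW gives a negative delta,
# the signature of snow laid down under colder conditions.
print(round(delta_18O(0.0019350), 1))
```

A sample with exactly the SMOW ratio gives δ18O = 0; ice from glacial periods typically gives strongly negative values.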
The graph below (although complicated looking) shows the melt years of the Greenland Ice Sheet over
the last 15 000 years compiled by the GISP2.
The dotted red line shows individual melting events
indicating a warmer climate and the black line is a mean or average showing the frequency of melting
events. You should notice that the melting events correspond to the periods of warmer temperatures
indicated in the table and graph we used in lessons. This is the evidence that supports the current
theory of temperature change.
Graph Shows GISP2 Ice Melt Periods over the last 15 000 years (Holocene)
If you want to read more about how climate has changed over the last 15 000 years please try
this link, it is a fascinating website! http://muller.lbl.gov/pages/IceAgeBook/history_of_climate.html
(a) Evidence for Climate Change
1. Historical Records
Historical records have been used to reconstruct climates dating back several thousands of years.
Historical data can be grouped into three major categories. First, there are observations of weather
phenomena, for example the frequency and timing of frosts, or the occurrence of snowfall.
Meteorological data, in the form of daily weather reports, are available for the British Isles from
1873 onward. Secondly, there are records of weather-dependent environmental phenomena, termed
parameteorological phenomena, such as droughts and floods. Finally, there are phenological records of
weather-dependent biological phenomena, such as the flowering of trees, or the migration of birds.
Major sources of historical palaeoclimate information include: ancient inscriptions; annals and chronicles;
government records; estate records; maritime and commercial records; diaries and correspondence;
scientific or quasi-scientific writings; and fragmented early instrumental records.
Much of this historical evidence is fragmentary or incomplete and therefore does not give us a complete archive of past climate. In recent history a variety of events have been evidenced through historical records. For example, records of vineyards in Southern Britain date back to 1600 years BP, when the climate was warmer than today: Exeter University's school of Archaeology and Geography has identified 7 Romano-British vineyards, 4 in Northamptonshire (then the Nene Valley) and one each in Cambridgeshire, Lincolnshire and Buckinghamshire. Northamptonshire would have had a slightly warmer climate and was in
the lower end of the precipitation range meaning less fungal infections making grape growing conditions
favourable. Frost fairs were also held on the River Thames in Tudor times (1400-1600 AD), indicating colder climatic conditions, although the old London Bridge constricted the Thames' flow and may be partly to blame for the easier onset of freezing than today. Indeed, in 1683 the Thames froze for 2
months to a thickness of 11 inches. The last frost fair was held in 1814 and since then the Thames has
never completely frozen.
2. Ice Cores
As snow and ice accumulates on ice caps and sheets, it lays down a record of the environmental
conditions at the time of its formation. Information concerning these conditions can be extracted from
ice and snow that has survived the summer melt by physical and chemical means. Palaeoclimate
information covering the Ice Age (the last 130,000 years) has been obtained from ice cores by three main approaches. These involve the analysis of: a) the (isotopic) composition of the water in the ice (see below); b) dissolved and particulate matter in the ice; and c) the physical characteristics of the firn and ice, and of the air bubbles trapped in it, such as their carbon dioxide and methane concentrations.
Carbon dioxide concentrations correspond with other
indicators; carbon dioxide values were low during colder
periods and higher during warmer phases. The majority of research has taken place on the Greenland ice sheet by the European Greenland Ice Core Project (GRIP) and by their North American counterparts, the Greenland Ice Sheet Project 2 (GISP2). The Russians have also drilled boreholes into the ice; the most famous is Vostok, in Antarctica. These teams have retrieved ice core samples totalling just over 3 km in length, i.e. 3 km deep into the ice. Oxygen isotopes taken by these teams of scientists have been used to deduce the past climate over the last 420 000 years. Ice core records go back no further than this, but evidence recorded in deep marine sediments does (see later notes).
3. Dendrochronology
The study of the relationships between annual tree growth and climate is called dendrochronology. Trees
record climatic conditions through growth rates. Each year, a tree produces a growth ring made up of
two bands: a band reflecting rapid spring growth when the cells are larger, and a narrower band of
growth during the cooler autumn or winter. The width of the tree ring indicates the conditions that
prevailed during its growth cycle, a wider ring indicating a warmer period. The change in ring width from
year to year is more significant than the actual width because bigger growth rings tend to be produced
during the early years of growth, irrespective of the weather. It is possible to match and overlap
samples from different sources – for example from living trees (e.g. Bristlecone Pines), trees preserved for 10,000 years in river terraces in Europe, and beams from older houses – all of which extend the dating further back in time.
NB. Bristlecone Pines in California, which have been living for the past 5,000 years, give a very accurate
measure of the climate.
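The matching-and-overlap idea above can be sketched as a toy cross-dating routine: slide a short, undated ring-width series along a longer master chronology and pick the offset where the year-to-year *changes* in width agree best. All series below are invented illustration data, and real dendrochronology uses standardised indices and statistical significance tests, so treat this as a minimal sketch of the principle only.

```python
# Toy cross-dating sketch: match a floating ring-width sample against a
# master chronology by comparing year-to-year changes (more diagnostic
# than the raw widths, for the reason given in the text above).
# All numbers are invented for illustration, not real measurements.

def diffs(series):
    """Year-to-year changes in ring width."""
    return [b - a for a, b in zip(series, series[1:])]

def best_overlap(master, sample):
    """Return the offset into `master` where `sample`'s pattern of
    changes agrees best (highest dot product of the change series)."""
    d_sample = diffs(sample)
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(master) - len(sample) + 1):
        d_master = diffs(master[offset:offset + len(sample)])
        score = sum(a * b for a, b in zip(d_master, d_sample))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Invented master chronology (ring widths, mm) and a "dead tree" sample
# whose rings happen to match years 3-6 of the master.
master = [1.0, 1.2, 1.4, 1.1, 0.6, 1.4, 1.3, 1.5, 1.7, 1.6]
sample = [1.1, 0.6, 1.4, 1.3]
print(best_overlap(master, sample))  # 3
```

In practice overlapping many such samples (living trees, preserved trunks, house beams) chains the record back through time, as the text describes.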
However, dendrochronology is fraught with complications and limitations. Factors other than
temperature also affect tree growth, for instance soil type, rainfall, human activity, light, carbon
dioxide concentration and disease.
4. Pollen Analysis (and movement of vegetation belts)
Many plant species have particular climatic requirements which
influence their geographical distribution. Pollen grains can be
used to determine the vegetation changes and by implication, the
changes in climatic conditions. The first plants to colonise land as
the climate warms up after an ice age are low tundra plants, mosses
and heather. Once these become established, trees like birch,
pine and willow start to grow. Eventually, when conditions are
much milder, trees like oak and elm flourish. So by boring into peat bogs (which preserve the ancient
pollen grains), the pollen from different plants gives an indication of the climate at that time.
5. Oxygen Isotope Analysis
As long ago as the 1950s, a significant breakthrough in knowledge about past climate change came from
the analysis of tiny fragments of calcium carbonate (shell material) that constantly accumulate on parts
of the deep ocean floor. These are the shells of single-celled marine organisms called foraminifera which
make up elements of planktonic life in the surface waters. At the time of their formation, the shells lock
up key information about the oxygen isotopes present in the ocean surface water. It was discovered by
Emiliani in 1954 that the ratio of the heavy isotope of oxygen (O18) to the lighter one (O16) can be
interpreted to give an estimate of sea surface temperature. Water containing the lighter O16 evaporates
more readily; when ice sheets grew during colder glacial times, this O16-rich water was locked up in the
growing ice rather than returning to the sea, leaving the oceans relatively enriched in the heavier O18
isotope. This means that the shells of foraminifera growing during glacial times were relatively
enriched in O18, whereas the oxygen locked in the air bubbles in the growing ice
sheet was relatively enriched in the lighter O16 isotope.
Data from the Vostok and GISP2 cores for the last 120,000 years show peaks of O18 (relatively
warm) and troughs of O18 (relatively cool) on the graph. Notice the trend towards greater O18 in the
ice accumulating in the ice sheet over the last 15,000 years. This suggests a warmer interglacial period.
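Isotope laboratories usually report the O18/O16 ratio in "delta" notation relative to a reference standard. A minimal sketch of that calculation, assuming the VSMOW water standard for the reference ratio (the sample ratios below are invented for illustration):

```python
# Delta-18O calculation sketch. R_STANDARD is the 18O/16O ratio of
# Vienna Standard Mean Ocean Water (VSMOW); the two sample ratios
# below are invented values chosen only to illustrate the sign change.

R_STANDARD = 0.0020052  # 18O/16O in VSMOW

def delta_18O(ratio_sample):
    """delta-18O in per mil (parts per thousand) relative to the standard."""
    return (ratio_sample / R_STANDARD - 1) * 1000

# A shell grown in a glacial ocean (enriched in the heavy isotope)
# versus one grown in a warmer, interglacial ocean:
glacial = delta_18O(0.0020092)       # positive: heavier, colder ocean
interglacial = delta_18O(0.0020032)  # negative: lighter, warmer ocean
print(round(glacial, 2), round(interglacial, 2))
```

Positive delta-18O in shell material thus points to glacial conditions, matching the enrichment argument in the text.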
6. Deep Marine Sediments
Deep marine sediments also record changes in past climate as silts and lime muds are deposited over
time. Trapped within these muddy layers are microfossils of foraminifera (forams) and diatoms, which
have CaCO3 (calcium carbonate) shells built from the oxygen held in the ocean water at the time of
their formation. Therefore, oxygen isotope ratios can be calculated, and shells enriched in O18
indicate cooler conditions. The main advantage of deep sea sediments as a record of climate change is
that they date much further back in time than the 420,000-year record obtained from the ice sheets.
Deep marine sediments also provide evidence of past climatic temperature, deduced from the most
abundant species of single-celled foraminifera (forams) and diatoms in a sample, which can be recognised
by their test (exoskeleton) morphology (shape). This determination of species based on shape is
termed morphospecies; it is estimated that there are over 4,000 such species living in the benthic
(deep marine) environment. Studies of present-day marine environments by marine biologists and
micropalaeontologists have highlighted the common species of diatoms and forams that occur in waters
of certain temperatures. By using the 'present is the key to the past' principle, climate scientists
researching in the field of micropalaeontology can use this modern-day species distribution to infer
the temperatures of past deposits from the microfossils found in a sample. That is to say,
different shaped foraminifera prefer different environments.
7. Glacial and Post Glacial landscapes and Deposits
Glacial landscapes such as those found in North Wales, the Lake District and Scotland in particular
could not have been produced by present day climatic conditions and therefore indicate the role of ice. Cwm
Idwal and the Nant Ffrancon, as well as the tills at Aberogwen, indicate glacial erosion and
glacial/fluvioglacial deposition respectively. Periglacial overprinting after the ice diminished has left
its effects plain on the landscape, such as the patterned ground of the NE Cairngorms and the tors on
Dartmoor.
8. Radio Carbon Dating
Modern methods of dating materials, such as carbon-14 dating, allow specific dates to be added to the
sequence of temperature changes identified from pollen analysis and dendrochronology. All vegetation fixes
(takes into its structure) carbon dioxide from the atmosphere via photosynthesis and stores it as
carbohydrate (CH2O) such as starch. C14, found in bone and wood from prehistoric organic remains, is
unstable, whereas C12 and C13 are stable. C14 has a half-life of 5,730 +/- 40 years, i.e. half of the C14 present in a sample will decay during this time period. If a scientist compares the amount of C14
in a prehistoric sample with the amount of C14 in a sample from a growing plant today, the age
can be deduced. After ten half-lives there is not much C14 left in the sample, so this method is
only accurate to about 50,000 years BP.
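The comparison described above reduces to a simple half-life formula: the age is the number of half-lives needed to decay from the modern C14 level down to the level measured in the sample. A minimal sketch (the sample fractions are invented for illustration; real labs apply calibration curves on top of this raw age):

```python
import math

# Raw radiocarbon age from the fraction of C14 remaining, using the
# 5,730-year half-life quoted in the text. Calibration to calendar
# years is a separate step not shown here.

HALF_LIFE = 5730  # years (+/- 40)

def radiocarbon_age(fraction_remaining):
    """Years before present, from the fraction of C14 left in a sample
    compared with a modern (living) reference."""
    return -HALF_LIFE * math.log2(fraction_remaining)

print(round(radiocarbon_age(0.5)))       # 5730  (one half-life)
print(round(radiocarbon_age(0.25)))      # 11460 (two half-lives)
print(round(radiocarbon_age(2 ** -10)))  # 57300 (ten half-lives)
```

After ten half-lives only about 0.1% of the original C14 remains, which is why the method's practical limit sits around 50,000 years BP as the text states.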
9. Insect Analysis and Coleoptera (Beetles)
Insects are the animal group with the largest known number of species. Among the beetles (Coleoptera)
alone there are about 350,000 different species, more than all flowering plants combined, so, not
surprisingly, many are highly adapted to specific ecological niches. For example, many species have
fastidious preferences for warmth or cold. Insect taxonomy is based on the characteristics of their
exoskeletons. Skeletons are often well preserved in sediments, but become disaggregated into their
component parts (heads, thorax etc.).
To date the main contribution has come from fossil Coleoptera, in particular in the reconstruction of
late Pleistocene palaeoclimates in mid-high latitude regions such as northwest Europe. Their thermal
likes and dislikes, coupled with their ability to migrate rapidly, make Coleoptera ideal indicators of
past temperatures. Studies at a variety of European sites have allowed Russell Coope (1975; Coope and
Lemdhal, 1995) to estimate average July temperature changes from before 15,000 to about 10,500 Cal. yr
BP. At the beginning of this period temperate assemblages were replaced by arctic ones; at
Glanllynnau in North Wales the change was amazingly rapid, at least 1°C per decade. Beetle assemblages have now
been used to provide evidence of both winter and summer temperatures using the mutual climatic range
method (Atkinson et al., 1987). In the Holocene (after 11,500 Cal. yr BP) the use of Coleoptera is
much more problematic.
(c) Recent Global Warming - its Causes and Effects
Causes:
The Earth has warmed by about 0.6°C in the last 100 years. During this period, man-made emissions
of greenhouse gases have increased (e.g. the carbon dioxide concentration in the atmosphere has risen
from 270 parts per million (ppm) to 370 ppm), largely as a result of the burning of fossil fuels and
deforestation. In the last 20 years, concern has grown that these two phenomena are, at least in part,
associated with each other. That is to say, global warming is now considered most probably to be due to
the enhanced greenhouse effect. Other greenhouse gases released by man's activities may also be
playing a role in the onset of anthropogenic (man-made) global warming. For instance, rapid
development in LDCs is driving demand for cheap meat production, which is firstly causing
deforestation to create low grade pasture for grazing. Secondly, cheap meat production releases methane
into the atmosphere, a greenhouse gas that causes warming.
(i) Effects on the UK
Introduction: The most critical of the risks associated with global warming is an increase in the frequency and
intensity of extreme weather such as hot spells, drought and storms. Accompanying a projected rise in
average surface temperature of between 0.9 and 2.4°C by 2050 will be an increased occurrence of
hot, dry summers, particularly in the southeast. Mild wet winters are expected to occur more often by
the middle of the 21st century, especially in the northwest, but the chance of extreme winter freezing
should diminish.
Higher temperatures may reduce the water-holding capacity of soils and increase the likelihood of soil
moisture deficits, particularly if precipitation does not increase as well. These changes would have a
major effect on the types of crops, trees or other vegetation that the soils can support. The stability
of building foundations and other structures, especially in central, eastern and southern England, where
clay soils with a large shrink-swell potential are abundant, would be affected if summers became drier
and winters wetter.
Any sustained rise in mean surface temperature exceeding 1°C, with the associated extreme weather
events and soil water deficits, would have marked effects on the UK flora and fauna. There may be
significant movements of species northwards and to higher elevations. Predicted rates of climate
change may be too great for many species, particularly trees, to adapt genetically. Many native species
and communities would be adversely affected and may be lost to the UK, especially endangered species
which occur in isolated damp, coastal or cool habitats. It is likely that there would be an increased
invasion and spread of alien weeds, pests, diseases and viruses, some of which may be potentially
harmful. Increased numbers of foreign species of invertebrates, birds and mammals may out-compete
native species.
Climate changes are likely to have a substantial effect on agriculture in the UK. In general, higher
temperatures would decrease the yields of cereal crops (such as wheat) although the yield of crops
such as potatoes and sugar beet would tend to increase. However, pests such as the Colorado beetle on
potatoes and rhizomania on sugar beet, currently thought to be limited by temperature, could become
more prevalent in the future. The length of the growing season for grasses and trees would increase by
about 15 days per degree Celsius rise in average surface temperature, an increase that could improve
the viability of crops such as maize and sunflower, which are currently grown more in warmer climates.
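The growing-season rule of thumb quoted above is simple enough to express directly. This is only an illustrative sketch of the stated relationship (about 15 extra days per degree Celsius of warming); the function name and the default parameter are invented:

```python
# Sketch of the rule of thumb in the text: the growing season for
# grasses and trees lengthens by roughly 15 days per 1°C rise in
# average surface temperature.

def extra_growing_days(temp_rise_c, days_per_degree=15):
    """Approximate extra growing-season days for a given warming."""
    return days_per_degree * temp_rise_c

# For the projected 0.9-2.4°C rise by 2050 mentioned earlier:
print(extra_growing_days(0.9), extra_growing_days(2.4))
```

So the projected 2050 warming range implies roughly two to five extra weeks of growing season, which is the scale of change behind the maize and sunflower point above.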
Increases in eustatic (global) sea level, and in the frequency and magnitude of storms, storm surges and
waves would lead to an enhanced frequency of coastal flooding. A number of low-lying areas are
particularly vulnerable to sea level rise, including the coasts of East Anglia, Lancashire, Lincolnshire and
Essex, the Thames estuary, parts of the North Wales coast, the Clyde/Forth estuaries and the Belfast
Lough. Flooding would result in short-term disruption to transport, manufacturing and housing, and
long-term damage to engineering structures such as coastal power stations, rail and road systems. In
addition, long-term damage to agricultural land and groundwater supplies, which provide about 30% of
the water supply in the UK, would occur in some areas due to salt water infiltration.
Water resources would generally benefit from wetter winters, but warmer summers with longer growing
seasons and increased evaporation would lead to greater pressures on water resources, especially in the
southeast of the UK. Increased rainfall variability, even in a wetter climate, could lead to more droughts
in any region in the UK. Higher temperatures would lead to increased demand for water and higher peak
demands, requiring increased investment in water resources and infrastructure. An increase in
temperature would increase demand for irrigation, and abstraction from agriculture would compete with
abstractions for piped water supply by other users.
Higher temperatures would have a pronounced effect on energy demand. Space heating needs would
decrease substantially but increased demand for air conditioning may entail greater electricity use.
Repeated annual droughts could adversely affect certain manufacturing industries requiring large
amounts of process water, such as paper-making, brewing and food industries, as well as power
generation and the chemical industry.
Sensitivity to weather and climate change is high for all forms of transport. Snow and ice present a very
difficult weather related problem for the transport sector. A reduction in the frequency, severity and
duration of winter freeze in the British Isles would be likely under conditions associated with global
warming and could be beneficial. However, any increase in the frequency of severe gale episodes could
increase disruption to all transport sectors.
The insurance industry would be immediately affected by a shift in the risk of damaging weather events
arising from climate change in the British Isles. If the risk of flooding increases due to sea level rise,
this would expose the financial sector to the greatest potential losses.
UK tourism has an international dimension which is sensitive to any change in climate which alters the
competitive balance of holiday destinations worldwide. If any changes to warmer, drier summer
conditions occur, this could stimulate an overall increase in tourism in the UK. However, any significant
increase in rainfall, wind speed or cloud cover could offset some of the general advantages expected
from higher temperatures. The British Ski industry would be an example of tourism that would be
adversely affected by rising temperatures.
Interestingly, a rise in temperatures means that many of the UK peat bogs that formed just after the
last Pleistocene glaciation are shrinking as they dry out. This has negative knock-on effects for species
diversity, and it also allows carbon dioxide that has been trapped for many years in organic matter to be
liberated as the peat bogs dry out, which in effect reduces the NET carbon dioxide stores available on
land. Durham University is currently leading cutting edge research into this phenomenon by studying the
peat bogs of Upper Teesdale near Middleton in Teesdale. This is similar to the process that happens in
the oceans: as temperatures rise, more CO2 is lost from the oceans, which in turn causes positive
feedback and even more warming of the system.
In light of the recent political attention given to this issue and the associated media coverage, 'climate
change' looks set to be a key issue for scientists to resolve for decades to come. Scientists know much
about the causes of climate change but are still undecided on the global temperatures the planet will
experience in the next 100 years. Equally, scientists are divided over the most appropriate action to
take, if any at all, to curb the recent warming. In fact there remains a deep division in the scientific
community over this issue: some regard the recent warming as merely a warmer period caused by the
Earth's natural cycles of warmer and cooler periods, while others regard it as a result of human activity
and believe the temperatures experienced in the coming years are likely to be higher than any ever
before experienced by man.
(ii) International Effects
The Intergovernmental Panel on Climate Change (IPCC) report presents a stark warning of the possible
effects resulting from accelerated anthropogenic climate change. Changes that are evident now,
together with predicted future changes, are listed below:
Melting Polar Ice Caps
The scientific community is now in agreement that recent global warming has been responsible for
a rapid and large scale shrinking of polar ice. In fact actual rates of ice break-up and loss are much
greater than was expected a decade ago, and temperatures have risen twice as fast in the
Arctic as anywhere else. Ice is being lost at a rapid rate on both main ice sheets, namely
Antarctica (the biggest ice mass) and the Greenland Ice Cap in the Arctic. For example, the largest
single block of ice in the Arctic, the Ward Hunt Ice Shelf, had been around for 3,000 years before
it started cracking in the year 2000. Within two years it had split all the way through and is now
breaking into pieces. The situation appears similarly critical in the Southern Ocean: between 31
January and early March 2002, the Larsen B sector of the Larsen Ice Shelf in Antarctica collapsed
and broke up. Some 3,250 km² of ice 220 m thick disintegrated, meaning an ice shelf covering an area
comparable in size to the US state of Rhode Island disappeared in a single season. Larsen B had been
stable for up to 12,000 years, essentially the entire Holocene period since the last glacial period.
The Arctic has lost 1.7 million km² of ice since 1980, shrinking to an area of 6.1 million km² in
2005, a loss rate of roughly 9% per decade. This has led some members of the scientific community
to forecast a dire worst case scenario of a total loss of Arctic sea ice by 2050 (i.e. in your
lifetime!), with a more unlikely date as early as 2013. Between 1980 and 2001, thirty of the
world's glaciers thinned significantly, by about 6 m.
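The Arctic figures quoted above can be cross-checked with back-of-envelope arithmetic. The sketch below treats the loss as linear rather than compounding, which is a simplifying assumption:

```python
# Cross-check of the quoted Arctic sea-ice figures: 1.7 million km2
# lost since 1980, leaving 6.1 million km2 in 2005, quoted as ~9%
# loss per decade. (Linear, non-compounding approximation.)

area_2005 = 6.1                 # million km2
ice_lost = 1.7                  # million km2 since 1980
area_1980 = area_2005 + ice_lost  # 7.8 million km2

loss_fraction = ice_lost / area_1980      # ~0.22 over 25 years
per_decade = loss_fraction / 2.5 * 100    # 25 years = 2.5 decades
print(round(per_decade, 1))  # 8.7
```

About 8.7% per decade, consistent with the roughly 9% per decade figure in the text.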
Location of Larsen B
The net result of the break-up of land ice like this is that sea levels rise (as more water is added
into the system). Remember this would not be the case for melting sea ice, as floating sea ice already
displaces its own mass of water; melting sea ice therefore has no effect on eustatic (global) sea level.
As well as raising sea levels, the lack of polar ice reduces the cooling effect the 'great ice sheets'
have on the oceans, so the oceans become warmer! Ice also reflects more radiation back to space as it
has a high albedo; the lack of ice therefore allows the Earth to absorb more heat. Both of these
factors mean that the oceans are warming and expanding. The most obvious effect of this is that
eustatic (global) sea levels will rise (see notes below).
The melting of the world's cold environments goes beyond ice sheets. For instance, as mentioned in
the last section, work by Durham University suggests that the drying out of peat bogs is releasing more
CO2 globally as the peat shrinks. Similarly to peat, permafrost (permanently frozen ground) around
the edges of the ice caps in periglacial regions is melting at an unprecedented rate. This wholesale
thawing of the permafrost in places like Siberia is allowing trapped methane (CH4) to be liberated
into the atmosphere. Methane is also a greenhouse gas, so the NET result is further warming, which
may well cause more melting of the permafrost, accelerating this process further.
One effect of melting huge volumes of shelf ice, and thereby adding enormous volumes of fresh water,
is that the North Atlantic Drift, part of the oceanic thermohaline circulation system, might shut down
due to desalination of the oceans. It is the North Atlantic Drift that gives us, as well as the eastern
USA, our warm climate for our latitude. If the circulation shuts down then the effect may well be a
short lived (on a geological scale) cooler period in which ice sheets actually advance again, as they
did at the end of the last Pleistocene glaciation. In the Allerod, about 15,000 years ago, temperatures
rose as the ice sheets receded and finally disappeared; much of the ice cap held on North America melted
and drained into the Atlantic, causing desalination and leading to the shutdown of the North Atlantic
Drift. This plunged parts of the Northern Hemisphere back into a short lived cold period, the Younger
Dryas, during which ice actually re-advanced and which lasted for 1,000 years.
Rising Sea Levels
Both increased water from melting glaciers and land-based ice sheets, and thermal expansion of
water due to increased heating, are leading to higher eustatic sea levels. According to the IPCC, global
sea levels rose at an average rate of 1.8 mm/year between 1961 and 2003, with the rate faster from
1993-2003 at about 3.1 mm/year and the total 20th century rise being about 17 cm. The aptly named
Stern Report predicts sea levels will rise by 28-43 cm by 2080, although some authorities predict that
complete melting of polar ice may occur and increase sea levels by 4-6 m.
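The quoted IPCC rates and totals can be related with simple arithmetic. The sketch below is illustrative only and assumes constant rates, which real projections do not:

```python
# Relating the sea-level figures quoted above (all from the text):
# 1.8 mm/yr (1961-2003), 3.1 mm/yr (1993-2003), 17 cm total in the
# 20th century, and the Stern Report's 28-43 cm by 2080.

rate_1961_2003 = 1.8   # mm/year
rate_1993_2003 = 3.1   # mm/year
total_20th_century_cm = 17

# Average 20th-century rate implied by the 17 cm total:
avg_rate = total_20th_century_cm * 10 / 100   # mm/year over 100 years
print(avg_rate)  # 1.7 - close to the 1.8 mm/yr 1961-2003 figure

# If the faster 1993-2003 rate simply continued from 2003 to 2080:
projected_cm = rate_1993_2003 * (2080 - 2003) / 10
print(round(projected_cm, 1))  # 23.9
```

Note that extrapolating even the faster recent rate gives about 24 cm by 2080, below the Stern range of 28-43 cm: the projections therefore assume the rate of rise will continue to accelerate.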
Rising sea levels, when coupled with the more unpredictable stormy weather a warmer global climate is
likely to bring, would have seriously adverse effects on low-lying areas of the world. More coastal
erosion and coastal flooding would occur, as well as contamination of underground water sources. For
instance, much of the Netherlands and Bangladesh would be adversely affected by coastal flooding, as
would island nations like the Maldives, where over half of the nation's populated islands lie less than
6 feet above sea level. Even major cities like Shanghai and Lagos would face similar problems, as they
also lie just six feet above present water levels. The problem of rising sea levels will even pose a
problem closer to home here in the UK. Low-lying estuaries such as the Thames estuary and East Anglia
could be badly affected by coastal flooding. London would be at huge risk, and damage to the national
and global economy could result if large scale floods occur in the future. For instance, in 2007 the
Thames Barrier was nearly overtopped by a higher than expected storm surge; if such a surge coincides
with a high spring tide in the future, many experts fear overtopping could be likely.
Rising seas would severely impact the United States as well. Scientists project as much as a 3-foot
sea-level rise by 2100. According to a 2001 U.S. Environmental Protection Agency study, this increase would
inundate some 22,400 square miles of land along the Atlantic and Gulf coasts of the United States,
primarily in Louisiana, Texas, Florida and North Carolina. It could therefore seal the fate of New
Orleans as uneconomic to redevelop.
Food Shortages
A warmer Arctic will also affect weather patterns and thus food production around the world. Although
the planet is generally getting warmer, and therefore more water is circulating in the atmosphere, some
areas are expected to receive less rainfall and some more, i.e. the change in precipitation will not be
spread evenly across the globe.
Countries that are less developed, and therefore less able to cope with food shortages, are expected to
be hit hardest by the effects of changing climate and shifting weather patterns. For instance, much of
Africa, the Middle East and India are expected to see considerably lower cereal yields as a result of
lower rainfall in these areas, whereas places like Bangladesh may expect higher rainfall during the
monsoon and more extreme flooding.
Wheat farming in Kansas, for example, would be profoundly affected by the loss of ice cover in the
Arctic. According to a NASA Goddard Institute of Space Studies computer model, Kansas would be 4
degrees warmer in the winter without Arctic ice, which normally creates cold air masses that frequently
slide southward into the United States. Warmer winters are bad news for wheat farmers, who need
freezing temperatures to grow winter wheat. And in summer, warmer days would rob Kansas soil of 10
percent of its moisture, drying out valuable cropland.
Health
By 2080, 290 million more people may well be exposed to an increased risk of malaria, especially in China
and central Asia. As these areas become wetter and receive a higher proportion of rainfall, it would be
logical to expect an increase in the malaria-spreading mosquitoes that live in swampy areas. Other
water-borne diseases may well increase in such areas, especially as the climate becomes milder.
Health will also be adversely affected by increased malnutrition, in Africa especially, as failures of
the rains and of crops become more common.
Ecological Damage/Extreme Weather Events
As the polar regions continue to warm, habitats may well be lost in both Antarctica and the Arctic.
For instance, habitat for whales and dolphins may diminish as the food chain breaks down. Other
species of sub-arctic flora and fauna may also be lost in tundra areas. This will threaten the way of
life of native peoples like the Inuit.
Forest and tundra ecosystems are important features of the Arctic environment. In Alaska, substantial
changes in patterns of forest disturbance, including insect outbreaks, blow down, and fire, have been
observed in both the boreal and southeast coastal forest. Rising temperatures have allowed spruce bark
beetles to reproduce at twice their normal rate. A sustained outbreak of the beetles on the Kenai
Peninsula has caused over 2.3 million acres of tree mortality, the largest loss from a single outbreak
recorded in North America. Outbreaks of other defoliating insects in the boreal forest, such as spruce
budworm, coneworm, and larch sawfly, also have increased sharply in the past decade.
Climate warming and insect infestations make forests more susceptible to forest fire. Since 1970, the
acreage subjected to fire has increased steadily from 2.5 million to more than 7 million acres per year.
A single fire in 1996 burned 37,000 acres of forest and peat, causing $80 million in direct losses and
destroying 450 structures, including 200 homes. As many as 200,000 Alaskan residents may now be at
risk from such fires, with the number increasing as outlying suburban development continues to expand.
The increase in forest fires also harms local wildlife, such as caribou.
Extreme weather events like the great UK storm of 1987 and Hurricanes such as Mitch and Katrina may
become more common. As well as these weather events it is probable that droughts and heat waves will
also become more common for certain regions.
Severe Water Shortages
Reduced rainfall, coupled with the salination of coastal water sources by sea water flooding and
saltwater incursion into aquifers, will result in less water being available for drinking, irrigation
and industry. This may well lead to future water wars in the Middle East, in places like Israel, as
well as in India; these places will be the worst affected. It is expected that 3 billion people could
suffer water stress by 2080.
Mass migration / War and Tension
Changing climate and weather patterns, in places like Africa especially, may result in the mass
migration of people across borders from one country to another. For instance, if the rains fail in the
wet season, causing drought, then crop failure and famine may well result, causing the migration of
refugees to places of refuge and food supply. In such harsh conditions this may lead to civil war and
unrest.
(iii) Effects on the Wet/Dry Savanna (Tropical Continental Climate in W. Africa)
Climate change could have far reaching effects on the global climate as temperatures continue to rise
and the amount of water circulating in the atmosphere increases. These effects could be especially felt
in the sensitive savanna regions which experience a tropical continental climate, one such place being
Kano, Nigeria (12°N).
Savanna grasslands represent unique sensitive ecosystems which are characterised by mainly tall
elephant grass which is broken by scattered isolated trees and shrubs. Many scientists fear that this
ecosystem could be taken over by woody trees and shrubs which are likely to colonise through
vegetation succession as the Sahel experiences greater precipitation over the next 50 years.
The savannas in their current state are both ecologically unique and economically vital for the survival
of the communities who live in these areas. Colorado State University published research in the
scientific journal Nature (2005) showing that rainfall is the most important controlling factor on
savanna development. From this work came two classifications:
Stable savannas – those that receive less than 650mm of rainfall per year; the limited rainfall
restricts tree growth, so grasses can co-exist.
Unstable savannas – those that receive more than 650mm of rainfall per year. The number of trees in
such savannas is not controlled by the amount of rainfall alone at present, but by the regulating
effect of fires and of grazing by wild animals, which clear grasses and encourage tree growth.
Trees in these regions, such as the Baobab, have become adapted to fire (pyrophytic
adaptations such as thick fire-resistant bark) and survive preferentially over grasses.
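The 650 mm threshold above amounts to a simple classification rule, sketched below. This is illustrative only: the threshold comes from the study as summarised in the text, while the function name is invented.

```python
# Sketch of the stable/unstable savanna split described in the text:
# below ~650 mm/year, rainfall alone restricts tree growth; above it,
# fire and grazing are needed to hold the trees back.

STABILITY_THRESHOLD_MM = 650

def classify_savanna(annual_rainfall_mm):
    """Classify a savanna by the rainfall threshold from the text."""
    if annual_rainfall_mm < STABILITY_THRESHOLD_MM:
        return "stable"    # rainfall alone restricts tree growth
    return "unstable"      # disturbance (fire, grazing) regulates trees

print(classify_savanna(500))   # stable
print(classify_savanna(1040))  # unstable - Kano's average, quoted below
```

By this rule Kano's 1,040 mm average already places it firmly in the unstable class, which is the point the following paragraph makes.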
Kano currently has an average rainfall of 1,040mm, so the savanna there is already classed as
unstable, and increased rainfall can only act to further encourage trees and result in the demise of the
grassland habitat! Complete loss of this sensitive ecosystem is likely in this region in the next 50
years, as average precipitation is set to increase in the majority of climate change models.
The balance between trees and grasslands influences vital characteristics of the ecosystem such as
livestock production as well as water balance and as a result drinking water supplies. Changes to the
grasslands that cause a reduction in the species diversity would be fundamentally detrimental to local
indigenous tribes who have adopted a sustainable way of life over hundreds of years. As well as
threatening the viability of indigenous populations, climate change may well damage local tourism in
countries such as Nigeria that rely on it heavily for revenue which funds investment and development.
Climate change may well threaten and decrease the species diversity and ecology of game reserves that
so many tourists come especially to witness. Although more trees would mean more elephants generally,
an invasion of trees would cut down on the number of other large mammals in the savanna which is bad
news for safari based tourism.
Diagram below is for illustration only, to show that increased rainfall would produce a savanna more
characteristic of the equatorial latitudes, i.e. parkland 'closed savanna':
It is very important to understand what drives the savannas in order to help manage them as climate
conditions change. Two schools of thought exist when trying to explain their origin: firstly that
rainfall is important in their development, and secondly that disturbances such as fire and grazing help
regulate them. Initially savannas were classified as a climate type under Köppen's early classification
in the early 1900s (as rainfall is controlled by climate), because other controlling factors such as
disturbances were poorly understood. It is most likely that both of these factors are responsible for
regulating the savanna, depending on the point in the season (i.e. wet/dry) at which they occur. There
is considerable dispute about the future of much of North Africa under climate change, underscoring the
difficulties in assessing one of the most complex mechanisms on the planet.
However, climate change might have some beneficial effects too for the tropical continental regions of
West Africa. Rising temperatures in the Sahara desert could actually be beneficial, reducing drought
in the Sahel region immediately south of it. Reindert Haarsma and colleagues, of the Royal Netherlands
Meteorological Institute were the first to consider the roles of both land and sea-surface
temperatures.
The Haarsma computer model suggests that if emissions of greenhouse gases are not reduced, higher
temperatures over the Sahara would cause 25% to 50% extra daily rainfall in the Sahel by 2080
during the months from July to September. The Sahara desert heats up faster than the oceans,
creating lower atmospheric pressure above the sands. This in turn leads to more moisture moving in from
the Atlantic to the Sahel. Also warmer air has a higher capacity to hold moisture allowing greater
precipitation to the south of the Sahara Desert.
Additional rainfall would allow greater agricultural yields, growth of a greater range of crops and
create a longer growing season as it is likely that the wet season may lengthen. Evidence from the
journal Biosciences (2002) confirms notions of a wetter climate and suggests that further north, in the dry desert climatic zone, parts of the Sahara Desert are showing signs of 'greening' due to increased rainfall (1982-2002). This has led some in the scientific community to predict a return to the Sahara Desert being a lush green savanna, as it was some 12,000 years ago!
However, on the downside, climate change could disrupt the pattern of seasonal rains brought about by the movement of the ITCZ north over Kano. It is the seasonal movement of the ITCZ in the summer months that brings Kano the SW Monsoon and therefore its wet season. If the ITCZ gets 'stuck' too far south, or fails to move as far north as 12°N early enough in the wet season, then the rains of the Monsoon may not come and areas such as Kano may experience crop failures.
Even if there is more rain per year on average for this region, it is likely that rains may become more
unreliable and the above effects more likely.
Increased rainfall and increased likelihood of frequent extreme weather events such as prolonged
monsoon rainstorms would increase the likelihood of flooding in the region. Flash floods apart from
obvious short term effects such as damage to homes, business and people could also result in long term
effects such as soil erosion and actually lead to subsequent reduction in agricultural yields in places
least equipped to deal with such losses.
A wetter climate in this part of West Africa may also mean an increase in mosquitoes and therefore an increase in malaria cases. Malaria is already endemic in Nigeria according to the WHO, where the mortality rate in children under five is 729 per million. Other water-borne diseases may become more common as temperatures rise and the climate becomes wetter.
The other side of the argument goes however, that global warming may have the opposite effect on this
region and cause drier conditions for West Africa as well as much of the continent as a whole and
desertification may result. Some predictions estimate a 50% drop in yields from rain fed agriculture
by 2020.
Whichever prediction, if any, proves to be the correct outcome for Tropical Continental West Africa in the future, it is important to note that there will be regional disparities in the magnitude of the effects suffered. For instance, more marginal areas to the north of the region may suffer greater hardship from increasingly unreliable rainfall patterns, a shortened growing season and a more unpredictable climate. Although not technically in the tropical continental region, places like Sudan would undoubtedly suffer significantly from the effects of lower rainfall, and famines such as those experienced in the 1980s. Famines in the Sahel region would lead to mass migration of people to neighbouring countries, and humanitarian disasters would therefore be commonplace.
(iv) Responses to climate change
Stern Report: The Stern report, the Treasury‘s comprehensive analysis of the economics of climate
change, estimates that not taking action could cost from 5 to 20 per cent of global GDP every year,
now and in the future. In comparison, reducing emissions to avoid the worst impacts of climate change
could cost around one per cent of global GDP each year.
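The scale of the Stern comparison can be shown with a short calculation. This is a minimal sketch: the global GDP figure is an assumed round number for illustration, as the report itself quotes only percentages.

```python
# Illustration of the Stern Report comparison. The $50 trillion global
# GDP figure is an assumed round number; only the percentages are from
# the report (5-20% cost of inaction vs ~1% cost of mitigation).
GLOBAL_GDP_TRILLION = 50.0

inaction_low = GLOBAL_GDP_TRILLION * 0.05    # 5% of GDP lost per year
inaction_high = GLOBAL_GDP_TRILLION * 0.20   # 20% of GDP lost per year
mitigation = GLOBAL_GDP_TRILLION * 0.01      # ~1% of GDP per year to act

print(f"Cost of inaction: ${inaction_low:.1f}-${inaction_high:.1f} trillion per year")
print(f"Cost of action:   ${mitigation:.1f} trillion per year")
```

On these assumed figures, acting costs a fraction of what inaction is projected to cost each year, which is the report's central economic argument.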
Other Findings:
The report identifies that there is still time to act to avoid the worst impacts of climate change.
Climate Change would have serious impacts on growth and development.
The costs of stabilising the climate are high but manageable; waiting and doing nothing will have much greater costs in the future.
Action on climate change is required across all countries and it need not cap the aspirations of
economic growth in the richest or poorest countries.
A range of options exists to cut emissions but strong deliberate policy is required to encourage
their take up.
Climate change demands an international response based on a shared understanding of common
goals and therefore agreement on frameworks for action.
Future key international frameworks should include: emissions trading, technology co-operation,
action to reduce deforestation.
International:
The global nature of the threat of climate change means that it must be tackled through international
co-operation, common policies and unified actions e.g. burning fossil fuels in China will have an impact on
the opposite side of the world, so we must take a global perspective. In reality these intentions have been difficult to realise!
The Kyoto Agreement (Protocol)
This is an agreement implemented under the United Nations Framework Convention on Climate Change with the intention of combating global warming. The aim is to stabilise greenhouse gas emissions at such a level that further damage to the world's environmental system is halted.
Recognising that developed countries are principally responsible for the current high levels of GHG emissions in the atmosphere as a result of more than 150 years of industrial activity, the Protocol places a heavier burden on developed nations under the principle of 'common but differentiated responsibilities'.
There are five principal concepts to the agreement:
Reduce greenhouse gases by committing Annex I countries to legally binding emissions limits;
Implementation to meet objectives i.e. prepare policies and measures to reduce greenhouse
gases, increase absorption of gases and use all other mechanisms available to reduce levels such
as emissions trading / credits;
Minimise risks to developing countries by establishing a climate change adaptation fund that richer states contribute to in order to help developing states overcome future challenges;
Account / review / report integrity of the Protocol;
Aid compliance of countries to the protocol by establishing a compliance committee to police it.
The agreement was first signed on 11th December 1997 in Kyoto, Japan. 187 states have signed and ratified the agreement to date.
Under the protocol 37 industrialised countries (Annex I) have committed themselves to reduce the levels of the four most common greenhouse gases: Carbon Dioxide (CO2), Methane (CH4), Nitrous Oxide (N2O) and Sulphur hexafluoride (SF6). (CFCs are controlled under a different agreement, the Montreal Protocol.) The Annex I countries committed to reduce their collective greenhouse gas emissions by 5.2% of 1990 levels. However this target does not include aviation or shipping. All other member countries have also pledged to reduce their emissions.
The EU (European Union) and its member states ratified the agreement in May 2002, and Russia in November 2004, clearing the way for the treaty to become legally binding on 16 February 2005. The USA is the most notable nation that is not party to the protocol (under the past Bush administration), even though the USA accounted for 36% of greenhouse gases at 1990 levels! The USA uses the non-inclusion of China and India as an excuse to opt out of the agreement.
The Kyoto Protocol is generally seen as an important first step towards a truly global emission
reduction regime that will stabilize GHG emissions, and provides the essential architecture for any
future international agreement on climate change.
By the end of the first commitment period of the Kyoto Protocol in 2012, a new international
framework needs to have been negotiated and ratified that can deliver the stringent emission
reductions the Intergovernmental Panel on Climate Change (IPCC) has clearly indicated are needed.
Unfortunately the UK is not alone in failing to reach its target of reducing emissions by 20% of 1990 levels by 2010, and has revised its timescale to a more realistic cut of emissions of 60% by 2050. Emissions in the UK are actually on an upward trend due to an increase from the energy sector.
Carbon Credits
What emerged from the discussions held during the construction of the Kyoto Protocol is that all nations release CO2; this CO2 must therefore be absorbed via tree planting or another process that can absorb it, such as sequestration. Alternatively, a country could simply cut its CO2 emissions in the first place. If a country produces more CO2 than it can absorb within that country, it must purchase an 'absorption ability' from another nation that has not produced as much CO2 as it can potentially absorb.
The absorption ability purchased is a Carbon Credit and it is equal to one tonne of CO2 called a CO2 e
(CO2 equivalent). A nation might have a shortfall of 500,000 CO2 credits as it produces an excess of
CO2 compared to that it absorbs. In this scenario a heavily polluting nation must then buy credits from
another nation that has CO2 absorbing ability for instance through the planting of trees that would fix
and soak up excess CO2. The cost per credit can be anywhere between $10 and $40. This therefore makes an economy that relies heavily on carbon uneconomic, and this financially discourages such behaviour. The planting of trees, as long as they are not later cut and burned, reduces CO2 in the atmosphere, as does encouraging ploughing practices that discourage CO2 release during harvesting. Forests can
be left to stand and weeds and hedge rows can be encouraged between fields. Fuel consumption can be
cut and power generation can be made more efficient in order to reduce CO2 usage. This increase in
stores and decrease in emissions means that a nation will need to buy less (or could even sell its credits
if it has a surplus) carbon credits making it a stronger economy. This is what is referred to as a low
carbon economy and illustrates how leading the way in low carbon initiatives could be highly profitable
for a nation. The money used to purchase credits will ultimately be returned to developing new low
carbon / energy efficient technologies. For instance New Zealand has already funded some of its new
wind farms from the profit it has made from carbon trading. Ireland, in contrast, has purchased 95% of its credits from overseas in order to offset its heavy reliance on industry based around fossil fuels.
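The carbon-credit arithmetic above can be sketched as a short calculation using the figures given in the text (a 500,000-credit shortfall at $10-40 per credit); the polluting nation itself is hypothetical.

```python
# Carbon-credit arithmetic using the text's example figures: a nation
# short of 500,000 CO2e credits buying at $10-40 per credit (one credit
# = one tonne of CO2 equivalent). The nation is hypothetical.
shortfall_credits = 500_000
price_low, price_high = 10, 40            # $ per credit

cost_low = shortfall_credits * price_low
cost_high = shortfall_credits * price_high

print(f"Purchase cost: ${cost_low:,} to ${cost_high:,}")
# Increasing absorption (e.g. tree planting) shrinks the shortfall and
# the bill - the financial incentive behind a low carbon economy.
```

For the text's figures this gives a bill of between $5 million and $20 million, which is the cost pressure that makes a carbon-heavy economy less competitive.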
Critics of the scheme point out that the richest nations will simply pay to pollute in order to fuel industry, and emissions may not actually be cut at all as economies continue to grow. Other problems exist whereby some industries suffer more than others; for instance aviation is becoming an increasingly uncompetitive business, and the downfall of BA can in part be attributed to the fact that it is discouraged from expanding operations. This is also true for Virgin airlines, who wanted to expand into Australia but could not do so as carbon trading credits made it uneconomic.
Other problems exist: in the UK, for instance, reported CO2 emissions are lower than they should be, as most consumable goods such as appliances, clothes and textiles, as well as the majority of things we buy on the high street, are no longer made locally in the UK but are outsourced to be made cheaply in less developed countries such as China. As China is the factory of the world, it therefore incurs huge amounts of carbon emissions.
In the future the result of building a low carbon economy whereby the cost of a product is also
controlled by its CO2 footprint may mean that instead of an increasingly globalised world economy that
we have seen over recent decades, we may well see a return to reliance on local economies where trading
occurs locally.
Carbon Capture
To prevent the carbon dioxide building up in the atmosphere, we can catch the CO2 and store it. As we would need to store thousands of millions of tonnes of CO2, we cannot just build millions of containers, but must use natural storage facilities. Some of the best natural containers are old oil and gas fields, such as those in the North Sea.
The diagram on the left shows a conceptual plan for CCS, involving two of the common fossil fuels: methane gas (also called natural gas) and coal.
Methane gas is produced from offshore gas fields and is brought onshore by pipeline. Using existing oil-refinery technology, the gas is 'reformed' into hydrogen and CO2. The CO2 is then separated by a newly-designed membrane and sent offshore, using a corrosion-resistant pipeline, to an oilfield. The CO2 is stored in the oilfield, several km below sea level, instead of being vented into the atmosphere from the power station.
Post-combustion capture involves removing the dilute CO2 from flue gases after hydrocarbon combustion. It can typically be built into existing industrial plants and power stations (known as retrofitting) without significant modifications to the original plant. This is the type of technology favoured by the UK Government in its competition for state support.
There are several methods that can be used to capture the CO2. The most common is passing the flue gas through a solvent that absorbs the CO2; amine solvents are typically used. A change in temperature and/or pressure will then release the CO2. Another process in development is calcium cycle capture, where quicklime is used to capture the CO2 to produce limestone, which can then be heated to drive off the CO2 and regenerate the quicklime for recycling. All of these require additional energy input to drive off the CO2 from the solvent - this typically results in extra energy costs of 20-30% compared to plants with no capture. New solvents are under development to reduce these penalties to 10%.
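The energy penalty can be turned into a rough worked example. The 500 MW plant size is an assumed figure, not from the text; only the 20-30% penalty (and the 10% target) come from the paragraph above.

```python
# Rough sketch of the capture energy penalty: driving CO2 off the
# solvent costs 20-30% extra energy, so a plant must generate more
# gross power to deliver the same net output. The 500 MW figure is an
# assumed example; the penalty percentages are from the text.
net_output_mw = 500.0

gross_needed = {p: net_output_mw * (1 + p) for p in (0.20, 0.30, 0.10)}
for penalty, gross in gross_needed.items():
    print(f"penalty {penalty:.0%}: generate {gross:.0f} MW gross "
          f"for {net_output_mw:.0f} MW net")
```

This shows why cheaper solvents matter: cutting the penalty from 30% to 10% reduces the extra generation (and fuel) a capture-fitted plant must supply.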
Other post-combustion possibilities, currently being researched, include cryogenically solidifying the
CO2 from the flue gases, or removing CO2 with an adsorbent solid, or by passing CO2 through a
membrane.
Pros:
Feasible to retrofit to current industrial plants and power stations.
Existing technology - 60 years' experience with amine solvents - but needs 10x scale-up.
Currently in use to capture CO2 for the soft drinks industry.
Cons:
High running costs – absorber and degraded solvents replacement.
Limited large scale operating experience.
Energy Production and the role of Renewables
Globally energy production accounts for 60% of emissions and the remainder is from private and other
industries. In the UK however, 80% of our energy comes from non-renewable fossil fuels that release
huge amounts of CO2. This problem can be dealt with by:
Reducing the emissions from power stations before they are released;
Using alternative sources of renewable energy
Reducing demand for energy by using less in industry, homes and transport.
Work needs to be done to adopt renewable energy resources. For instance, in the UK mechanisms have been put in place to reduce our dependency on fossil fuels by 10% by 2013 by adopting renewables such as:
Wind Energy
Geothermal Power
Ground Source Heat Pumps
Waste-fired power stations using animal and food waste
Biodiesel
Solar energy
Tidal energy
National:
EU Emissions Trading System / UK Low Carbon Transition Plan
Emissions trading allows the government to regulate the total amount of emissions produced in the
country by setting an overall cap for the scheme. They then allow individual industries and companies
the flexibility to decide how and where the emissions reductions will be achieved. Participating
companies are allocated allowances but can emit in excess of their allowance by purchasing additional allowances from the market that other industries have sold back because they may well have met their target. The environmental benefit of this is not compromised, as the total amount of allowances remains fixed (think of how the total amount of money is fixed in a game of Monopoly).
change programme attempts to address the need to reduce atmospheric CO2 by means of the EU
Greenhouse Gas Trading Scheme. Members can either make the savings in their own country or buy
these emissions reductions from other countries.
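The cap-and-trade mechanism described above can be sketched with two invented firms. The names and tonnages are illustrative, but the key property - trading redistributes allowances while the overall cap stays fixed - is exactly what the Monopoly analogy describes.

```python
# Toy cap-and-trade sketch. Firm names and figures are invented; the
# point is that trading moves allowances around while the overall cap
# (total allowances issued) never changes.
cap = 100_000                                          # tonnes CO2
allowances = {"PowerCo": 60_000, "SteelCo": 40_000}    # allocated
emissions = {"PowerCo": 70_000, "SteelCo": 30_000}     # actual

# SteelCo beat its target, so it sells its surplus to PowerCo.
surplus = allowances["SteelCo"] - emissions["SteelCo"]
allowances["SteelCo"] -= surplus
allowances["PowerCo"] += surplus

assert sum(allowances.values()) == cap                 # cap unchanged
assert all(emissions[f] <= allowances[f] for f in allowances)
print("Both firms compliant; total emissions capped at", cap, "tonnes")
```

PowerCo pays for the extra allowances, so over-emitting carries a cost even though the system-wide total is unchanged.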
Carbon Trust
The Carbon Trust is an independent company set up in 2001 by Government in response to the threat of
climate change, to accelerate the move to a low carbon economy by working with organisations to reduce
carbon emissions and develop commercial low carbon technologies.
Aims: The Carbon Trust's mission is to accelerate the move to a low carbon economy now and develop
commercial low carbon technologies for the future.
They cut carbon emissions by providing business and the public sector with expert advice, finance and
certification to help them reduce their carbon footprint and to stimulate demand for low carbon
products and services. The trust claims to have saved over 17 million tonnes of carbon, delivering cost savings of over £1 billion.
The trust aims in future to cut carbon emissions by developing new low carbon technologies. They do
this through project funding and management, investment and collaboration and by identifying market
barriers and practical ways to overcome them. The work on commercialising new technologies will save
over 20 million tonnes of carbon a year by 2050.
Advertising and Awareness Campaigns
There have been many TV and radio campaigns as well as poster campaigns to raise awareness of climate
change. E.g. act on CO2.
-Act on CO2 Website
Provides information for individuals and businesses on how to reduce CO2 emissions.
Local:
Car Sharing Schemes / Cycle to work schemes / Walking Buses
Many car sharing schemes have been set up where people share a lift into work or share a journey, so there are fewer cars on the daily commute carrying only one occupant. To take this further, the government has proposed incentive schemes whereby one lane on the motorways (such as the hard shoulder) can only be used by those lift sharing. There are also many local ride-to-work schemes whereby the government subsidises cycle equipment by offering individuals who buy bikes tax breaks on their salary, as well as a small monthly deduction from their wage. This is often set up for free by local bike stores and there are many different types of scheme.
Home Energy Monitor
Online facilities are now offered by energy companies that allow homeowners to monitor their energy
consumption and carbon emissions in order to save both money and reduce CO2 emissions.
Some energy companies also provide energy monitors that can be plugged in at home that monitor energy
consumption and help raise customer awareness of energy wastage.
School Culture of Energy Saving
The education sector is doing its bit to increase awareness of the climate change issue and reduce the amount of energy wasted, and hence the amount of unnecessary CO2 released. This can happen as early as primary school: at Pendle Primary School, Clitheroe, for example, students as young as five are elected as light monitors and are responsible for turning the lights off in classrooms at break time.
Free energy saving light bulbs
Many local councils have government grants made available that they have used to buy energy saving light bulbs, which can be collected for free from local police stations. Energy companies such as British Gas have also provided their customers with energy saving bulbs. Obviously these companies are pressured into doing this, as they have a corporate responsibility to encourage a sustainable future.
Grants for improving energy efficiency
Local councils have grants available that local residents can apply for to help fund/part fund energy
saving improvements such as cavity wall and loft insulation. Blackburn Council for instance has money
available from central government that helps people on certain benefits improve the energy efficiency
of their homes. The grants can be between £2700 -£4000 and can be used to upgrade heating systems.
Householders in Blackburn can also apply for grants of up to £2,500 per property towards the cost of
installing a certified renewable technology product by a certified installer. Grants are also available in
the Blackburn with Darwen area for the installation of technologies in public sector buildings, not-for-profit organisations and charitable bodies. They can apply for between 30% and 50% of the installation
costs of approved technologies. There is a maximum of £1 million in grant funds available per site.
In Lancashire all residents can get detailed advice on energy efficiency saving solutions from the
Lancashire Energy Advice Centre (Blackburn Town Hall)
Recycling Message
Schemes that encourage recycling, such as 'reduce, re-use, recycle', cut the waste of plastics, metals and paper, therefore saving the emissions involved in manufacturing new materials.
8. Climate on a local scale: Urban Climates
Climatic regions by definition are large areas characterised by similar climatic conditions that persist over time; in reality they are not entirely homogeneous or constant. In order to understand climate on a local scale, geographers refer to the term microclimate, which involves the study of the climate on a much smaller scale. For instance you could study the differences in microclimate between a large deep valley and a high altitude mountain range, or compare the microclimate of a large conurbation with that of its 'more rural' hinterland.
Urban climates are arguably the most interesting microclimates as human interference has effects on
the climate on a local scale that then has an impact on the lifestyle / quality of life for the population
living in such cities.
(a) Temperature:
“Why are urban areas hotter than rural areas?”
E.g. Manchester can be 2°C hotter than surrounding rural areas. Why?
Temperature in cities is primarily controlled by:
Atmospheric Composition
Air pollution in cities makes the transmission of light leaving the city surface (rebounding back to space) significantly lower than in nearby rural areas, which have lower levels of pollution. E.g. in Detroit (a manufacturing region), USA, there is 9% less transmission of reflected radiation back to space, increasing to 25% less transmission of light on a calm day.
Daytime heating of the boundary layer occurs because aerosols (pollution) absorb solar radiation during
the day but this does not have as much effect up to the mean roof level (urban canopy layer). Incoming
solar radiation (short wave) is actually reduced by pollution but is counterbalanced (offset) by the lower
albedo (i.e. surfaces are darker and tend to absorb heat, therefore causing warming). Also, cities have a feature referred to as urban canyons (the shape and arrangement of buildings), which effectively increases the surface area being heated, as cities have a greater surface area of material that can potentially be heated compared to rural areas.
Urban Surfaces
The nature of the urban surface controls how well it heats up:
-Character – i.e. type of surface: some surfaces heat up better than others and also have higher heat capacities, leading to hotter city temperatures;
-Density of urban surface – i.e. total surface area of structures, as well as building geometry (shape) and arrangement.
City centres have relatively high heat absorption and therefore high temperatures; however at street
level the readings can be confusing and are often lower than expected due to shading from tall buildings.
The geometry of urban canyons is important as it effectively increases the surface area by trapping
multiple reflected short wave radiation and also reduces reflection from the surface back to space as
there is a restricted “sky view” again due to building geometry.
Anthropogenic Heat Production
Traffic (cars and other vehicles), industry, buildings and even people release heat. Amazingly, these sources release similar levels of heat energy to that of incoming solar radiation in winter!
In the year 2000 the Boston-Washington DC megalopolis (great city region) had an estimated 56 million residents inhabiting a land area of 32,000 km² - which produced enough anthropogenic heat to account for an equivalent of 50% of the winter radiation and 15% of the total summer radiation!
In Arctic regions anthropogenic heat provides enough warmth to produce a positive heat balance; otherwise conditions in such Arctic urban areas would be much cooler.
Urban Heat Island Effect
The NET effect of urban thermal processes (human activity) is to make urban temperatures
considerably greater than those in surrounding rural areas.
The greatest effect occurs in the mid latitudes under the influence of clear and calm conditions (anticyclones), which prevent cloud formation and cloud cover. The result is that rural areas become disproportionately cooler, making the effect more apparent.
 Factors contributing to urban heat islands:
1) Thermal heat capacity of urban structures is high. Canyon geometry dominates the canopy layer through heating by conduction and convection from buildings losing heat, and from traffic;
2) By day there is absorption of short-wavelength radiation by pollution;
3) Less wind in urban areas due to more shelter from urban canyons therefore less heat
dissipation;
4) Less moisture in local atmosphere in cities due to there being less vegetation and quicker run-off
meaning there is less heat needed for evaporation of this moisture present so the remaining heat
not used up by the evaporation process has the effect of increasing temperatures.
The result of the above is that average urban temperatures can be 5-6°C warmer than rural areas, and 6-8°C warmer in the early hours of the morning during calm nights as the city radiates the heat it absorbed during the day. It is heat loss from buildings that is by far the greatest factor in controlling urban heat islands.
Urban heat islands show the greatest increase in mean temperatures in the largest cities that have undergone huge population growth. For instance Osaka, Japan has a high population density, which many attribute as a cause of a 2.6°C temperature rise over the last 100 years, as well as it having very tall buildings. Similarly, many North American cities show a comparable temperature rise and show the greatest temperature difference between rural and urban environments - up to 12°C for American cities with a population over 1 million people. European cities show a much smaller temperature difference, as buildings here are generally lower and have shallower urban canyons. It is also now recognised that urban population density has a greater effect on city temperatures than simply city population size: a high-density city such as Tokyo (12 million population) would have a greater temperature rise than a similar sized city with a lower population density such as Illinois (13 million population), for example.
Case Study: London 1930-1960 average temperatures
City Centre: 11°C
Suburbs: 10.3°C
Countryside: 9.6°C
Calculations suggest that London's domestic fuel use in the 1950s increased temperatures by 0.6°C on
average in winter. The regional wind speed needs to be
low and the city sheltered by topography for a heat
island to operate effectively. The heat island might be
so great that it may generate its own inward spiralling
wind systems at the surface.
Consequences of heat islands:
Large Cities tend to suffer badly during heat wave conditions when tarmac, paved surfaces and
bricks heat up and retain heat at night time making the effects of heat wave conditions worse.
For instance in 1987 Athens suffered tragic consequences through this process and hundreds
died through heat stress and dehydration.
Snow tends to lie for less time in city centres and near major roads, but may lie for longer in city-centre parks where the grass surface heats up less quickly. This may have ramifications for transport planning.
As discussed earlier heat islands might have a positive effect of actually making it possible to
inhabit inhospitable arctic areas in winter.
(b) Precipitation
Temperatures are generally higher in cities, and therefore the air in cities can hold more moisture than that in cooler rural areas; relative humidity levels are subsequently 6% lower in cities. Usually there is less vegetation cover in cities and fewer surface water stores, meaning lower evapotranspiration rates. In terms of cloud cover, cities often experience thicker and more frequent cloud than rural areas. This is mainly because convection currents are deflected upwards above cities, causing cooling and condensation as they rise above the urban boundary layer (the area affected by the urban surface). Also, cities contain more sooty particulate matter from factories and exhaust fumes, which forms cloud-forming nuclei in the atmosphere above cities.
Excess cloud cover helps to explain
why on average large cities are 5-15% wetter based on their average rainfall totals.
In addition, areas 30-60 km downwind of a large city receive on average one third more monthly precipitation than areas upwind. This can be explained by urban heat islands increasing air moisture content as temperatures are warmer, and because evaporation of water from gardens and the cooling towers of power stations allows more moisture to be carried by the prevailing winds leaving the city. It takes time for moisture to condense into droplets of sufficient size to fall as rain, so rain is heavier downwind of cities.
(Left) Photochemical Smog over Athens
Cities also suffer from a 400% increased probability of
hail storms resulting from intense convectional uplift
from rapidly warming man-made surfaces such as tarmac
and concrete.
Cities are also 25% more likely to suffer thunderstorms in the summer for the same reason; e.g. in London's northern suburbs thunderstorms are more likely as rising thermals are encouraged by ridges of high ground. Cities are also more likely to suffer from thicker and more frequent fogs: firstly, they release more pollutants from industry and transport, resulting in more particulate matter in the atmosphere. This particulate matter encourages fog formation as it provides condensation nuclei for water droplets; this, linked with a greater chance of calm conditions (due to urban canyons providing shelter), makes fogs a persistent problem, especially under anticyclonic conditions when still air prevents fogs and smogs from being blown away (e.g. Athens, Mexico City, and Manchester pre-1970s due to the burning of vast amounts of coal).
Modern smog does not usually come from coal but from vehicular and industrial emissions that are acted on in the atmosphere by sunlight to form secondary pollutants; these combine with the primary emissions to form man-made low-level ozone, known as photochemical smog (e.g. Athens).
Case Study: Los Angeles
The smog that occurs is a result of a combination of a number of factors. The various forms of pollution
from vehicles (8 million in LA), industry and power stations become trapped in the lower atmosphere
due to the occurrence of a temperature inversion. This phenomenon, which occurs during the summer months, prevents mixing of the upper and lower atmosphere, trapping the pollutants. The pollution consists of nitrogen oxides, ozone, sulphur dioxide, hydrocarbons and various other gases; brush fires can add even more pollution to the atmosphere. The pollution exacerbates breathing problems such as asthma and causes a huge increase in the number of breathing-associated admissions to casualty, and may even result in death in very sensitive or unwell people. City dwellers often
become upset by the high level of pollution due
to the risk to health that it poses, however
most people are also unwilling to give up their
car to help reduce pollution! The response of
the city government is to impose restrictions on
emissions by industry and cars, but many of the
large companies fear impact on their profits and
therefore prevent any effective cuts from
being made. Overall it seems as though the
political will to make a difference is not there.
(c) Air Quality
The quality of air in urban areas is invariably poorer than that of rural areas, although in the UK air quality has improved since the decline of the manufacturing industry in the 1970s, as well as the requirement for vehicles to be fitted with catalytic converters. This said, cities still have on average 7 times more dust in the atmosphere than surrounding rural areas. The main factors contributing to such particulate solid matter and gaseous pollutants are the combustion (burning) of fossil fuels in power stations and by private/public transport. Cities tend to have 200 times more sulphur dioxide (SO2), 10 times more nitrogen dioxide (NO2), 10 times more hydrocarbons and twice as much carbon dioxide (CO2). These anthropogenic pollutants increase the likelihood of cloud cover and precipitation, as well as increasing the chance of photochemical smogs. All of the above absorb and retain more heat from incoming solar radiation but conversely reduce the sunlight levels in cities.
Sulphur dioxide and nitrogen dioxide, both considerably higher in urban areas, cause acid rain
and can pollute areas downwind of major industrial cities and even major industrial countries (e.g.
acid rain generated in the UK is carried by the prevailing wind over Scandinavia).
There are primary pollutants that usually come from burning fossil fuels e.g. carbon monoxide from
incomplete combustion from car exhausts and secondary pollutants that combine with other molecules in
the atmosphere and can be acted on by sunlight to form new more poisonous substances such as the
formation of photochemical smog.
The main atmospheric pollutants are:
Sulphur Oxides (SOx): especially SO2 is released from burning fossil fuels.
Nitrogen Oxides (NOx): especially NO2 is released from high temperature combustion and can
be recognised as the factor causing the brown haze in the plume downwind of cities.
Carbon Monoxide (CO): is a poisonous gas released by incomplete combustion of hydrocarbons
or even wood with the main contributor being vehicle exhausts.
Carbon Dioxide (CO2): Odourless and colourless greenhouse gas emitted from combustion.
Volatile Organic Compounds (VOCs): Released from hydrocarbons and some solvents such as
paints.
Particulate Matter: a measure of the smoke and dust content of the atmosphere, measured as
PM10, i.e. particles up to 10 µm (microns) in diameter. Particles smaller than this enter the nasal
cavity, and ultra-fine particles less than 2.5 µm (PM2.5) enter the bronchial tubes in the lungs and
cause respiratory problems.
Unfortunately the highest levels of particulate matter occur in developing countries where
legislation on emissions is not very strict, such as India and China. These are also the
manufacturing centres of the world. To compound the problem, such LDCs are also the countries
witnessing the most rapid population growth, and they already have the highest populations. This
means that an increasing proportion of the world's population is at risk from particulate-matter-induced
health problems (respiratory problems), which will reduce life expectancy in these developing
countries; and it is the poorest in the developing world who suffer most of the costs of the cheaply
manufactured products that the developed world demands.
(d) Winds
Urban structures have a significant effect on the microclimate in terms of wind patterns and wind
speeds.
(a) Air flowing in narrow streets: causes air pollution as turbulent eddies pick up dust and
particulate matter, reducing air quality.
(b) Tall buildings: may create a downwash effect in the lee (sheltered side), so that emissions from
chimneys high up at urban canopy level cannot escape and become trapped at ground level. Obviously
the amount of pollution will depend on the meteorological conditions at the time, which control
either air turbulence or subsidence.
(c) Air flowing in narrow streets - the Venturi Effect: higher wind speeds caused by the narrowing
of streets creating "wind tunnels".
Buildings also cause deflections to the wind, setting up circulations around tall buildings.
Together these form a set of unpleasant effects that tightly packed city architecture suffers from
regularly. Manchester's recent Spinningfields retail and leisure development, just off Deansgate,
has been so plagued by these effects that its purpose-built pavement cafes are very unpleasant to
use indeed, unless you like dust or worse in your Mocha!
Developers must therefore try to reduce the effects of spiralling air dynamics if they want shoppers
to linger for longer and therefore spend money. If developers cannot build the structures any
smaller, they sometimes build on a pedestal of one or two storeys, so that it is the pedestal, and
not the entrances to the shopping malls, that suffers the worst effects of wind tunnelling and
increased wind speed.
Manchester Spinningfields Development - A Venturi Nightmare!
Chapter 2: Plate Tectonics
1. Plate Tectonic Theory
This states that the Earth's outermost layer, the lithosphere, is fragmented into eight or more large
and small plates which float upon more mobile material underneath called the asthenosphere. Rates of
movement range from 10 mm/year to 100 mm/year and average around 70 mm/year (about the speed at
which your fingernails grow). The Earth's tectonic, volcanic and seismic (earthquake) activity is related
to the movement between neighbouring plates. The narrow zone where plates meet is termed a plate
boundary.
There are three types of plate boundary:
Constructive or divergent plate boundary
Destructive or convergent plate boundary
Conservative or transform plate boundary
But this theory was not always believed to be true and only through a series of discoveries has the plate
tectonic theory been developed.
World Map showing major tectonic plates and types of plate boundary
2. History of Plate Tectonics
The theory we now call plate tectonics has emerged from a long and complicated history. Below is an
outline of how this theory developed:
(i) Continental drift - The theory of continental drift was put forward in 1912 by the German
meteorologist Alfred Wegener. He put forward a great deal of evidence to show that the continents had
moved, but provided no hint of a mechanism to explain why.
(ii) Sea-floor spreading - Through the 1940s, 1950s and 1960s scientists started to study the ocean
floor in detail. During this time the Mid-Atlantic Ridge was discovered, and along with studies of
palaeomagnetism and the dating of the rocks at the middle of the Atlantic, the theory of
sea-floor spreading was put forward by F.J. Vine and D.H. Matthews. This was the mechanism that
helped explain continental drift.
(iii) Plate Tectonics - After the 1964 Alaskan earthquake the process of subduction of oceanic crust
was proven, which led to the development of the theory of plate tectonics. This theory is now virtually
universally accepted, but may still be modified following further investigation and study.
3. Characteristics of Tectonic Plates
Plates are the rigid, solid blocks of the lithosphere between 100-300 km thick. They are thinner below
the oceans, especially in the vicinity of mid-oceanic ridges, and thickest in certain continental areas,
including Antarctica, North Africa and Brazil.
These lithospheric plates consist partly of crust (either thin, dense basaltic oceanic crust of
6-10 km, or thicker, lighter granitic continental crust of 35-70 km) and partly of upper mantle.
They move over the more mobile, plastic asthenosphere (about 5% molten) below, driven by deep-seated
convection currents in the mantle.
These plates vary in size (7 or 8 major plates and several minor ones); some are entirely oceanic
lithosphere (e.g. the Nazca plate) and some are partly continental and partly oceanic lithosphere
(e.g. the Eurasian plate).
4. Evidence for Plate Tectonics
The first ideas behind plate tectonic theory were introduced by Alfred Wegener (a German
meteorologist) in 1912. He believed that there was once a "supercontinent", called Pangaea.
He then proposed that Pangaea began to split into two large landmasses, Gondwanaland (southern
hemisphere) and Laurasia (northern hemisphere). These landmasses then continued to split, forming the
continents we know today. Wegener called this the Theory of Continental Drift.
Wegener's theory was based on a number of pieces of evidence that appeared to him to be more than
coincidences.
a. Continental Fit:
The most obvious piece of evidence that occurred to Wegener was
the remarkable fit of South America and Africa, showing that these two
continents were probably part of the same continental block at some
time in the past.
So, how do we identify the true edge of a continent today? Usually, the edge of a continent
is defined as being halfway down the steep outer face of the continent, that is, the
continental slope. When we try to fit the continents together, we should fit them along this
line, the true edge of the continent, rather than along the present-day coastline. This example
was constructed visually, but today this kind of map is usually drawn by computers
programmed to find the best fit between the continents, usually at about 500 fathoms
(roughly 1000 m), known as the 500-fathom isobath. In the case of Africa and South
America the fit is remarkable; in the "best-fit" position, the average gap or overlap between
the two continents is only 90 km (56 mi). Interestingly, the most significant overlapping
areas consist of relatively large volumes of sedimentary or volcanic rocks that were formed
after the time when the continents are thought to have split apart!
b. Geological Evidence:
Wegener noticed certain similarities in rock types across numerous continents. Remarkably, the rocks
in the Appalachian Mountains of North America are of exactly the same age and rock type, and aligned
(i.e. have the same trend), as mountains in Britain and Norway, suggesting that these continents were
once together. Rocks in South America and Africa also show similarities.
c. Climatological Evidence:
Wegener also noticed that there were certain rock types that you would not expect to find in countries
with certain climate types. For example, evidence of glaciation, including till deposits, which you
would normally expect in cold countries, has been found in Africa and Australia. This suggests
that Africa and Australia have changed their positions over time and were probably nearer to
Antarctica. In Britain and in Antarctica coal deposits have been found, indicating that both places
were probably in the tropics at some point in the past!
d. Fossil Evidence:
Fossil remains of a small freshwater reptile called Mesosaurus have been found on the east coast of
South America and on the west coast of Africa. Mesosaurus had limited swimming ability and would have
been incapable of swimming the distance which now exists between Africa and South America, suggesting
that these continents were once much closer together. The discovery of fossils of tropical plants (in
the form of coal deposits) in Antarctica also led to the conclusion that this frozen land must
previously have been situated closer to the equator, in a milder climate where lush, swampy vegetation
could grow. Other mismatches of geology and climate included a distinctive fossil fern (Glossopteris)
discovered in all of the southern continents. Glossopteris was a warmth-loving plant with large seeds
(which could not be spread vast distances), suggesting that all these continents were once joined
together. Glossopteris is found in coal seams of the southern continents; this in itself is important
evidence of movement, as coal forms in warm tropical conditions similar to those of the Everglades in
Florida, where it is forming today. However, Antarctica now has a glacial alpine climate, so at one
time it must have been much further north, nearer the equator.
However, Wegener could not explain the mechanism that would cause the continents to move apart, and
people refused to believe his theory. Sadly for Wegener, it was not until long after his death that
his evidence was used to prove the theory of continental drift and then plate tectonics.
The next step in developing plate tectonic theory was the theory of sea-floor spreading. F.J. Vine
and D.H. Matthews developed this theory, collecting together the evidence that suggests sea-floor
spreading occurs. There are several pieces of evidence which support this theory:
e. The Discovery of the Mid Atlantic Ridge:
Ocean Floor Mapping in the 1940s: Originally the ocean floor was thought to be a flat surface, but
bathymetric surveys (surveys of the sea floor), in which sound waves are bounced off the sea floor to
gauge its depth, showed the ocean floor to have a more rugged surface. While investigating islands in
the Atlantic in 1948, Maurice Ewing noted the presence of a continuous mountain range extending the
whole length of the ocean bed. This was the discovery of the Mid-Atlantic Ridge, a ridge 1000 km wide
and 2500 m high. Ewing also noted that the rocks were volcanic and recent in origin.
f. Palaeomagnetism:
As has been known for centuries, the Earth is
magnetic, with poles at either end. These poles
are close to (but not exactly coincident with)
the Earth's axis of rotation, and both its
position around the geographic pole, and the
strength of the magnetic field, vary. The earth
therefore has what scientists call a dipole field.
What causes the magnetic field is not known for
sure. It is most likely that it is related to
movements in the iron-rich, partially-liquid,
outer core. Convection within the core would set
up a dynamo effect (geodynamo), which would
produce a magnetic field. This field acts
through and over the surface of the planet, and
beyond.
Polar Wandering Curves
Magnetic compass needles point toward their opposite pole (remember, opposites attract), as do the
magnetic components of iron-rich molten rock. Within lava these are magnetite crystals, which become
permanently magnetised in the direction of the Earth's magnetic pole as the lava cools and hardens
into rock. The temperature at which a material loses its magnetic properties is called the Curie
point; in order for a crystal of magnetite to become aligned with the Earth's magnetic field it must
cool past this temperature, which is about 570°C for magnetite.
Thus, as lava flows and sets, it 'records' the relative direction of the magnetic pole at the time it
solidifies. Furthermore, because the magnetic field acts through the planet (rather than just over its
surface), the angle of dip (the inclination) of the magnetite crystals records the distance from the
pole, which thus indicates the latitude at which the rock formed.
which thus indicates the latitude at which the rock formed.
If either the solid rock, or the pole, moves, the original orientation and inclination of the magnetic pole
with respect to the rock is retained within the magnetite of the rock. This is termed remnant
magnetism.
The inclination of magnetite crystals in rocks indicates the latitude at which they formed (see
diagram above): 90 degrees inclination at the poles and zero degrees at the equator. From the
magnetic inclination recorded in basaltic rocks, diagrams known as apparent polar wandering curves
have been plotted for the continents. These diagrams are plotted assuming that no drift of the
continents has occurred; what is plotted instead is the 'apparent' movement of the poles through
time. It is most likely that the poles remained stationary and that it is the continents that have
moved. The old pole (palaeopole) positions are plotted for rock samples of different ages, and when
these points are connected with a line a polar wandering curve is constructed.
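For a geocentric axial dipole field, the relationship between inclination and latitude described above is the standard dipole formula tan(I) = 2 tan(latitude). A minimal sketch (the function name is my own, not from the text):

```python
import math

def palaeolatitude_deg(inclination_deg):
    """Palaeolatitude from magnetic inclination, via the dipole formula tan(I) = 2 tan(lat)."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2))

# Consistent with the text: a horizontal field (0 degrees) forms at the equator,
# while steeper dips indicate formation at higher latitudes.
print(palaeolatitude_deg(0))   # 0.0
print(palaeolatitude_deg(45))  # ~26.6 - a 45 degree dip formed in the subtropics
```

Inverting the same formula for a suite of dated basalts is, in essence, how the palaeopole positions behind a polar wandering curve are derived.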
The left-hand diagram below shows the polar wandering positions for the continent of Eurasia over the
last 500 million years (Ma), where the poles appear to move steadily northwards from the equator to
the current geographical north pole. One curve alone would not be evidence that the continents had
drifted, but when a second curve is added (right-hand diagram) for North America a similar northward
movement is indicated. The main difference between the curves occurs at 100-200 Ma, suggesting that
the continents drifted apart during this time.
(a) Apparent Polar Wandering curve for Eurasia
(b) Apparent Polar Wandering curves for Eurasia and N America.
Palaeomagnetic Anomalies in the 1950s: Studies of the magnetism of the rocks of the sea floor were
carried out by towing sensitive instruments (magnetometers) behind research vessels. These began to
reveal a curious striped pattern of magnetism on the sea floor: half the stripes were magnetised in
the direction of the present magnetic field, but half were magnetised in the reverse direction. What
these vessels were recording is known as palaeomagnetism ("old magnetism"). Volcanic rocks such as
the basalts formed at mid-ocean ridges have cooled from magma (molten rock). They contain tiny iron
minerals (such as magnetite) that line up with the Earth's magnetic field at the time they form. Once
these volcanic rocks become solid, the iron minerals in them cannot move and therefore record the
direction of the Earth's magnetic field in the past. Rocks forming today have their iron minerals
aligned with the present-day magnetic field (magnetised with normal polarity) and are seen as
high-intensity magnetism on a magnetometer reading (see the diagram below). However, the direction of
the Earth's magnetic field has flipped many times in the past, around 4-5 times every million years
on average (although it is not known why), and volcanic rocks that formed when the magnetic field was
reversed are seen as low-intensity magnetism on a magnetometer reading. As the diagram below shows,
these readings are totally symmetrical either side of the oceanic ridges.
The less-than-straight pattern of magnetic stripes in the diagram above is due to unequal spreading
rates on different sections of the ridge.
Sea-floor spreading in the 1960s: In 1962, Harry Hess studied the age of the rocks from the middle
of the Atlantic outwards to the coast of North America. He confirmed that the newest rocks were in
the centre of the ocean, and were still being formed in Iceland, and that the oldest rocks were those
at the outside edges of the Atlantic Ocean. He also noted that the ages were symmetrical about the
mid-oceanic ridges.
All this new information from the ocean floor was elegantly explained in 1963 by two British
geologists, Fred Vine and Drummond Matthews, who suggested the idea of a mid-ocean "tape recorder"
(see diagram above). As new ocean floor is added at the mid-oceanic ridges it is magnetised according
to the direction of the Earth's magnetic field, and it retains this magnetisation as it moves away.
Reversely magnetised stripes represent ocean floor formed at times when the Earth's field was in the
reverse direction. Although this theory of sea-floor spreading explained Wegener's idea of
"continental drift", there was one main difficulty: the implication that the Earth must be increasing
in size with all this generation of new sea floor. Since this is not so, evidence was needed to show
that elsewhere parts of the oceanic lithosphere were being destroyed.
Subduction Zones: The discovery, after the 1964 Alaskan earthquake, that deep ocean trenches lead
down into huge fault zones known as subduction zones, where oceanic lithosphere is destroyed, was the
last piece of the jigsaw of the theory of plate tectonics. This theory is now universally accepted
and is outlined in the diagram over the page.
Subduction at a continental - ocean collision.
(g) Mantle Tomography
Seismic tomography is a method of using seismic waves from earthquakes (plus some other data) to
create 3D images of the mantle. These studies pick out areas of fast or slow seismic velocity, which
correspond to areas of low and high temperature respectively. Areas through which waves move quickly
tend to be cooler (blue) or consist of denser rock; areas where waves move slowly indicate warmer
(magenta) or less dense rock. From these studies you can actually see subducting slabs and upwelling
at mid-ocean ridges. Below is a seismic tomographic map at 50 km depth. You can see features such as
the East Pacific Rise and the Indian ridge systems, the volcanoes around the Pacific rim and even
(just) the Hawaiian hotspot, all of which show up as magenta.
Seismic Tomographic Map of the World
(h) Gravity Anomalies
More traditional evidence comes from gravity studies. The gravity anomaly over a subduction zone
shows many features; the figure below shows a typical gravity anomaly. A high gravity anomaly
indicates areas of high density (an excess of mass), whereas a low anomaly indicates areas of low
density (a mass deficiency).
i) Volcanoes j) Earthquakes
Global Pattern of Plate Tectonics
Scientists now have a fairly good understanding of how the plates move and how such movements relate
to earthquake activity. Most movement occurs along narrow zones between plates where the results of
plate-tectonic forces are most evident.
There are three types of plate boundaries:
Divergent boundaries - where new crust is generated as the plates pull away from each other.
Convergent boundaries - where crust is destroyed as one plate dives under another.
Transform boundaries - where crust is neither produced nor destroyed as the plates slide
horizontally past each other.
Note: there are also broad belts in which boundaries are not well defined and the effects of plate
interaction are unclear; these are known as plate boundary zones.
5. Constructive Plate Margins
Constructive plate boundaries occur along spreading centres where plates are moving apart and new
crust is created by magma rising up from the mantle. Picture two giant conveyor belts, facing each other
but slowly moving in opposite directions as they transport newly formed oceanic crust away from the
ridge crest.
Perhaps the best known of the divergent boundaries is the Mid-Atlantic Ridge. This submerged
mountain range, which extends from the Arctic Ocean to beyond the southern tip of Africa, is but one
segment of the global mid-ocean ridge system that encircles the Earth (approx. 70,000 km in length).
The rate of spreading along the Mid-Atlantic Ridge averages about 2.5 cm/yr, or 25 km in a million
years. This rate may seem slow by human standards, but because this process has been going on for
millions of years, it has resulted in plate movement of thousands of kilometres. Sea-floor spreading
over the past 100 to 200 million years has caused the Atlantic Ocean to grow from a tiny inlet of water
between the continents of Europe, Africa, and the Americas into the vast ocean that exists today.
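The arithmetic behind these figures can be checked with a short sketch (the function name is my own; the rate and timespans are those quoted above):

```python
# Sketch: convert a spreading rate in cm/yr into total new ocean floor over geological time.
# The ~2.5 cm/yr rate is the Mid-Atlantic Ridge figure quoted in the text.

def total_spreading_km(rate_cm_per_yr, years):
    """Width of new ocean floor (km) created at a ridge spreading at the given rate."""
    return rate_cm_per_yr * years / 100 / 1000  # cm -> m -> km

# 2.5 cm/yr over 1 million years = 25 km, matching the text
print(total_spreading_km(2.5, 1_000_000))    # 25.0

# Over 150 Ma (mid-range of the 100-200 Ma quoted for the Atlantic's opening):
print(total_spreading_km(2.5, 150_000_000))  # 3750.0 km - ocean-basin scale
```

A rate that seems negligible on a human timescale therefore accounts for the entire width of the Atlantic when sustained over a hundred million years or more.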
Mid-oceanic ridges are commonly between 1000 and 1500 km wide, and individual peaks may rise some
3000 m from the ocean floor. The ridges have an irregular pattern, being offset by a series of faults
running at right angles to the main ridge, known as transform faults. These faults are the product of
uneven spreading rates along ridge systems, which as a result are not straight. In effect, the
transform faults allow even spreading from the ridge where it changes direction.
Down the centre of these mid-oceanic ridges are deep rift valleys, formed as the brittle lithosphere
pulls apart allowing a middle section to be step faulted downwards. Along the floor of the rift valley
linear cracks known as fissures open up under tension allowing magma to reach the surface as lava flows.
Diagram showing a typical Mid Ocean Ridge System.
As two oceanic plates (lithosphere) pull apart, in a process referred to as sea-floor spreading, the
lithosphere is thinned and the underlying asthenosphere moves upwards to fill the space. This
upwelling brings the asthenosphere nearer to the surface, where less pressure acts on it from above,
causing it to start to melt. This process is known as decompression melting. Partial melting of
asthenosphere material forms a basaltic magma. Since this molten material is very hot it is lighter
than the surrounding rocks and begins to rise through weaknesses in the lithosphere, such as
fissures. Due to the hot and runny nature of the molten material formed at constructive plate
margins, much of it flows out of these linear fissures to form extensive basaltic lava flows.
Eventually the lava cools and solidifies in these fissures to create more oceanic lithosphere (and
the sea floor spreads...).
Some of this rising magma reaches the surface through single, tube like weaknesses known as vents to
form shield volcanoes. Icelandic shield volcanoes are made up of basalt lava flows. The low viscosity and
runny nature of the basalt means these volcanoes have slopes of less than 10º in angle, and spread out
for many kilometres. A classic example is Skjaldbreidur, in Iceland, which has uniform slopes of 7 - 8º,
is 600m high, and has a diameter of about 10km.
The volcanic country of Iceland, which straddles the Mid-Atlantic Ridge, offers scientists a natural
laboratory for studying on land the processes also occurring along the submerged parts of a spreading
ridge. Iceland is splitting along the spreading center between the North American and Eurasian Plates,
as North America moves westward relative to Eurasia.
Map showing the Mid-Atlantic Ridge splitting
Iceland and separating the North American and
Eurasian Plates. The map also shows Reykjavik,
the capital of Iceland, the Thingvellir area, and
the locations of some of Iceland's active
volcanoes (red triangles), including Krafla.
The consequences of plate movement are easy to see around Krafla Volcano, in the northeastern part of
Iceland. Here, existing ground cracks have widened and new ones appear every few months. From 1975
to 1984, numerous episodes of rifting (surface cracking) took place along the Krafla fissure zone. Some
of these rifting events were accompanied by volcanic activity; the ground would gradually rise 1-2 m
before abruptly dropping, signalling an impending eruption. Between 1975 and 1984, the displacements
caused by rifting totalled about 7 m.
Constructive plate margins have associated seismic activity as well as volcanism. The earthquakes at the
mid-oceanic ridges are always shallow focus. They appear to have three main causes:
small earthquakes (< 6 on Richter scale) are associated with rising magma from the
asthenosphere
small earthquakes are also located along the step faulting associated with the fracturing of the
brittle lithosphere as it is pulled apart during the formation of rift valleys
large earthquakes (> 7 on Richter scale) occur along the transform faults that offset the ridge
and produce its characteristic irregular pattern. Although the two sections of lithosphere
are pulling away from the ridge as a result of sea-floor spreading, they actually slide
past one another along the transform faults. The friction caused by this movement produces
large, shallow-focus earthquakes similar to those formed at conservative plate margins.
6. Destructive Plate Margins
The size of the Earth has not changed significantly during the past 600 million years, and very likely not
since shortly after its formation 4.6 billion years ago. The Earth's unchanging size implies that the crust
must be destroyed at about the same rate as it is being created. Such destruction (recycling) of
crust takes place along convergent boundaries, where plates move toward each other and one plate
sometimes sinks (is subducted) under another. The location where a plate sinks is called a subduction
zone.
The type of convergence that takes place between plates depends on the kind of lithosphere involved.
Convergence can occur between an oceanic and a largely continental plate, or between two largely oceanic
plates, or between two largely continental plates.
(a) Oceanic-continental convergence
If by magic we could pull a plug and drain the Pacific Ocean, we would see a most amazing sight - a
number of long narrow, curving oceanic trenches thousands of kilometers long and 8 to 10 km deep
cutting into the ocean floor. Trenches are the deepest parts of the ocean floor and are created by
subduction. Oceanic lithosphere subducts underneath continental lithosphere due to the differences in
their average densities (3.0g/cm³ for oceanic lithosphere and 2.7g/cm³ for continental lithosphere).
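The density contrast quoted above decides which plate sinks, and can be expressed as a trivial comparison (an illustrative sketch; the names are mine, not from the text):

```python
# Illustrative sketch: of two converging plates, the denser one subducts.
# Densities (g/cm3) are those quoted above: oceanic ~3.0, continental ~2.7.

def subducting_plate(plates):
    """Given a {name: density} mapping, return the name of the denser plate, which subducts."""
    return max(plates, key=plates.get)

convergence = {"oceanic lithosphere": 3.0, "continental lithosphere": 2.7}
print(subducting_plate(convergence))  # oceanic lithosphere
```

The same comparison explains why neither plate subducts in continental-continental collision: with two plates of similar low density, there is no clear "denser loser" to sink.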
Off the coast of South America along the Peru-Chile trench, the oceanic Nazca Plate is pushing into and
being subducted under the continental part of the South American Plate. In turn, the overriding South
American Plate is being lifted up, creating the towering Andes fold mountains, the backbone of the
continent. Strong, destructive earthquakes and the rapid uplift of mountain ranges are common in this
region.
Earthquakes have a more complex pattern at destructive plate margins than the simple shallow
earthquakes at constructive plate margins. The inclined zone in which shallow, intermediate and deep
focus earthquakes are recorded in a subduction zone is known as a Benioff zone. The earthquakes in
this zone appear to have different causes:
Shallow, intermediate and deep focus earthquakes occur along the descending slab as it
overcomes friction to force its way into the asthenosphere below.
Destructive shallow focus earthquakes also occur as the overlying plate is buckled by the
descending plate; the accumulated strain is released periodically, forming a megathrust earthquake.
See diagram below.
Oceanic-continental convergence also sustains many of the Earth's active volcanoes, such as those in
the Andes and in the Cascade Range of the Pacific Northwest (e.g. Mt. St. Helens). However, the origin
of magma erupted at destructive plate margins is much more complex than that at constructive plate
margins. The simplest view is that magma is derived from partial melting of the descending oceanic
lithosphere in the subduction zone. It is also thought that seawater trapped in the descending
lithosphere helps to lower the melting points of these rocks allowing them to melt. This process is known
as hydration melting, and is a second important way magma can be formed within the earth without the
need for the melting zone to be heated up.
Hot molten material is much lighter than the surrounding rocks and rises up through the lithosphere.
This molten material tends to accumulate in reservoirs or chambers below the Earth's surface,
eventually making its way upwards through weaknesses in the overlying lithosphere to form volcanoes.
A much wider variety of lavas are produced at destructive plate margins. In general they tend to have
higher silica content (such as andesitic lava) than those at constructive margins, and thus are more
violently explosive and hazardous at destructive margins. This is due, in part, to greater levels of silica
increasing the viscosity of the magma, as well as to increased gas content. In a more viscous
(thicker) magma, dissolved gases cannot escape as it rises towards the surface. This puts the
magma under huge pressure, eventually leading to an explosive volcanic eruption.
Steep-sided cone shaped volcano
Explosive and hazardous eruption
The high degree of explosivity and thick viscous andesitic lavas produced at destructive plate margins
forms steep-sided cone shaped volcanoes, such as Mt. St. Helens in USA and Popocatepetl in Mexico.
(b) Oceanic-oceanic convergence
As with oceanic-continental convergence, when two oceanic plates converge, one is usually subducted
under the other, and in the process an oceanic trench is formed. The Marianas Trench (paralleling the
Mariana Islands), for example, marks where the fast-moving Pacific Plate converges against the slower
moving Philippine Plate. The Challenger Deep, at the southern end of the Marianas Trench, plunges
deeper into the Earth's interior (nearly 11,000 m) than Mount Everest, the world's tallest mountain,
rises above sea level (about 8,854 m).
Subduction processes in oceanic-oceanic plate convergence also result in the formation of volcanoes.
Over millions of years, the erupted lava and volcanic debris pile up on the ocean floor until a submarine
volcano rises above sea level to form an island volcano. Such volcanoes are typically strung out in chains
called island arcs. As the name implies, volcanic island arcs, which closely parallel the trenches, are
generally curved. The trenches are the key to understanding how island arcs such as the Marianas and
the Aleutian Islands have formed and why they experience numerous strong earthquakes. Magmas that
form island arcs are produced by the partial melting of the descending plate and/or the overlying
oceanic lithosphere. The descending plate also provides a source of stress as the two plates interact,
leading to frequent moderate to strong earthquakes.
(c) Continental-continental convergence
The Himalayan mountain range dramatically demonstrates one of the most visible and spectacular
consequences of plate tectonics. When two continents meet head-on, neither is subducted because the
continental rocks are relatively light and, like two colliding icebergs, resist downward motion. Instead,
the crust tends to buckle and be pushed upward or sideways forming fold mountains. The collision of
India into Asia 50 million years ago caused the Eurasian Plate to crumple up and override the Indian
Plate. After the collision, the slow continuous convergence of the two plates over millions of years
pushed up the Himalayas and the Tibetan Plateau to their present heights. Most of this growth occurred
during the past 10 million years. The Himalayas, towering as high as 8,854 m above sea level, form the
highest continental mountains in the world. Moreover, the neighbouring Tibetan Plateau, at an average
elevation of about 4,600 m, is higher than all the peaks in the Alps except for Mont Blanc and Monte
Rosa, and is well above the summits of most mountains in the United States.
The collision between the Indian and Eurasian plates has pushed up the Himalayas and the Tibetan
Plateau
Cartoon cross sections showing the meeting of these two plates before and after their collision. The
reference points (small squares) show the amount of uplift of an imaginary point in the Earth's crust
during this mountain-building process
7. Conservative Plate Margins
Transform boundaries
The zone between two plates sliding horizontally past one another is called a transform fault boundary,
or often a conservative plate margin. Most transform fault boundaries are found on the ocean floor.
They commonly offset the active spreading ridges, producing the zig-zag pattern of plate margin seen at
constructive margins. However, a few occur on land, for example the San Andreas Fault zone in California. This
transform fault involves the North American Plate and the Pacific Plate sliding past each other.
The San Andreas is one of the few transform faults exposed on land.
The San Andreas fault zone, which is about 1,300 km long and in places tens of kilometers wide, slices
through two thirds of the length of California. Along it, the Pacific Plate has been grinding horizontally
past the North American Plate for 10 million years, at an average rate of about 5 cm/yr. Land on the
west side of the fault zone (on the Pacific Plate) is moving in a northwesterly direction relative to the
land on the east side of the fault zone (on the North American Plate). As a result of this slippage, in
another 10 million years Los Angeles is likely to be next to San Francisco! In 60 million years' time it will
start sliding into the Aleutian Trench south of Alaska!
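The slip-rate arithmetic above can be checked in a few lines (a minimal sketch; the 5 cm/yr figure is the average rate quoted in the text):

```python
# Rough check of the San Andreas slip arithmetic:
# an average slip rate of 5 cm/yr sustained over 10 million years.

slip_rate_cm_per_yr = 5.0   # average rate quoted in the text
years = 10_000_000          # 10 million years

displacement_km = slip_rate_cm_per_yr * years / 100 / 1000  # cm -> m -> km
print(displacement_km)  # 500.0 km of lateral movement
```

500 km is roughly the present separation of Los Angeles and San Francisco, which is why the text can make its "next to San Francisco" prediction.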
Often the plates cannot slide past each other due to frictional forces. Pressure builds up over time,
eventually released as seismic energy (earthquakes) when the fault line finally moves (ruptures) again.
Earthquakes along the San Andreas Fault are almost always shallow in focus.
At conservative plate margins lithosphere is neither created nor destroyed; the plates simply slip
laterally past one another. The only features to form are fault scarps, as seen in the photo above. These
occur when one side of the fault moves up or down, as well as sideways, during an earthquake, leaving it
higher than the other side.
8. Hot Spots
The vast majority of earthquakes and volcanic eruptions occur near plate boundaries, but there are some
exceptions. For example, the Hawaiian Islands, which are entirely of volcanic origin, have formed in the
middle of the Pacific Ocean more than 3,200 km from the nearest plate boundary. How do the Hawaiian
Islands and other volcanoes that form in the interior of plates fit into the plate-tectonics picture?
In 1963, J. Tuzo Wilson came up with an ingenious idea that became known as the "hotspot" theory.
Wilson noted that in certain locations around the world, such as Hawaii, volcanism has been active for
very long periods of time. This could only happen, he reasoned, if relatively small, long-lasting, and
exceptionally hot regions, called hotspots, existed below the plates, providing localized sources of high
heat energy (thermal plumes) to sustain volcanism. Heat from the hotspot beneath Hawaii produces a
persistent source of magma by partly melting the overriding Pacific Plate. The magma, which is lighter than the
surrounding solid rock, then rises through the mantle and crust to erupt onto the seafloor, forming an
active volcano. Over time, countless eruptions cause the volcano to grow until it finally emerges above
sea level to form an island volcano. Wilson suggested that continuing plate movement eventually carries
the island beyond the hotspot, cutting it off from the magma source, and volcanism ceases. As one island
volcano becomes extinct, another develops over the hotspot, and the cycle is repeated. This process of
volcano growth and death, over many millions of years, has left a long trail of volcanic islands and
seamounts across the Pacific Ocean floor.
In these diagrams the hot spot is located under the young (0.7 million
years) volcanic island of Hawaii, making it volcanically active. As the
overlying oceanic plate has moved over time the line of volcanic islands can
be seen, with the oldest (5.5 million years) island Kauai now volcanically
extinct. Over all this time the anomalous area of high heat flow known as
a hot spot has stayed stationary.
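Under the hotspot model the island ages give a rough estimate of plate speed. In the sketch below, the ~500 km Kauai-to-Hawaii separation is an assumed illustrative figure (it is not stated in the text); only the island ages come from the diagrams described above.

```python
# Estimating Pacific Plate speed from hotspot island ages.
# The 500 km distance is an ASSUMED figure for illustration only;
# the ages (5.5 Myr and 0.7 Myr) come from the diagram description.

distance_km = 500.0        # assumed Kauai -> Big Island separation
age_gap_myr = 5.5 - 0.7    # time between the two islands forming over the hotspot

speed_cm_per_yr = distance_km * 1e5 / (age_gap_myr * 1e6)  # km -> cm, Myr -> yr
print(round(speed_cm_per_yr, 1))  # 10.4, i.e. roughly 10 cm/yr
```

A result of around 10 cm/yr is consistent with the generally quoted motion of the Pacific Plate, which is one reason the hotspot interpretation of the island chain is convincing.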
Summary
The uppermost part of the Earth forms a rigid shell known as the lithosphere, underlain by a more
mobile zone known as the asthenosphere. The lithosphere is composed of several plates in relative
motion. Three types of plate boundary exist (constructive, destructive and conservative) and there is a
relationship between seismicity, vulcanicity and plate boundaries. Forces driving plates are of thermal
origin, involving convective motions of the lithosphere and the underlying asthenosphere and lower
mantle. Volcanoes, hot springs and surface heat flow provide evidence for this internal heat. The main
source is radioactive decay. Removal of heat from the mantle by convection and conduction keeps
temperatures below melting point, except at plate boundaries. Some rocks contain a record of the
direction of the Earth's magnetic field at the time of their formation. This palaeomagnetism can be
used to map plate movements through geological time. This remanent magnetism occurs in basaltic rocks
once lava flows have cooled below the Curie point. Polar wandering curves for different continents
demonstrate continental drift and ocean floor magnetic anomalies demonstrate sea floor spreading.
Natural Hazards
Earthquakes
1. Causes
Earthquakes occur because of a sudden release of stored energy. This energy has built up over long
periods of time as a result of tectonic forces within the earth. Most earthquakes take place along faults
in the upper 25 miles of the earth's surface when one side rapidly moves relative to the other side of
the fault. This sudden motion causes shock waves (seismic waves) to radiate from their point of origin
called the focus and travel through the earth. It is these seismic waves that can produce ground motion
which people call an earthquake.
Each year there are thousands of earthquakes that can be felt by people and over one million that are
strong enough to be recorded by instruments. Strong seismic waves can cause great local damage and
they can travel large distances. But even weaker seismic waves can travel far and can be detected by
sensitive scientific instruments called seismometers, which produce a printed record known as a seismogram.
Seismic Waves
Surface waves (known as L waves) are restricted to the Earth's surface. Although they are capable of
being transmitted right round the Earth, they serve no useful purpose for determining the internal
structure of the Earth; they do, however, cause the most damage after an earthquake.
Body waves (the P and S waves) can travel deep into the Earth's interior.
P waves
These are compressional or push waves; they travel through both fluid and solid materials. They are the
fastest waves, travelling at between 4-7 km/s at the Earth's surface.
S waves
These are shear or shake waves; they cannot travel through fluids like air or water, because fluids
cannot support the side-to-side particle motion that makes up an S wave. They are slower than P waves,
travelling at between 2-5 km/s at the Earth's surface.
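Because P waves outrun S waves, the gap between their arrivals at a seismometer gives a rough distance to the earthquake. The sketch below uses assumed mid-range speeds taken from the ranges above (P: 4-7 km/s, S: 2-5 km/s):

```python
# How the P-S arrival gap locates an earthquake.
# Speeds are illustrative mid-range values, not measured ones.

vp = 6.0   # assumed P-wave speed, km/s
vs = 3.5   # assumed S-wave speed, km/s

def distance_from_sp_lag(lag_seconds):
    """Distance (km) to the focus given the S-minus-P arrival time gap."""
    return lag_seconds / (1 / vs - 1 / vp)

print(round(distance_from_sp_lag(10.0)))  # a 10 s gap puts the focus roughly 84 km away
```

With the same lag measured at three or more stations, the circles of possible distance intersect at the epicentre, which is how earthquake locations are routinely fixed.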
According to plate tectonic theory huge slabs of rigid lithosphere (plates) 100-300 km thick are in
constant movement, driven by convection currents originating deep within the Earth. Enormous pressure
builds up at the margins of the plates which, when released, causes a sudden jolt or earthquake. This
accounts for the large number of earthquakes that occur at plate margins. The fact that the most
deadly earthquakes occur at destructive and transform plate margins suggests that much greater
pressures build up at these margins than at constructive margins.
i. Destructive Plate Margins
When two plates meet the denser plate is forced underneath the less dense plate along an inclined fault
zone known as a subduction zone. This inclined zone in which shallow focus (0-75 km), intermediate
focus (75-300 km) and deep focus (300-700 km) earthquakes are recorded is known as a Benioff zone.
The earthquakes in this zone appear to be due to different reasons:
earthquakes occur along the entire length of the Benioff Zone as friction builds up as the
descending plate forces its way into the asthenosphere.
megathrust earthquakes also occur along subduction zones when the overriding plate is buckled down by
the descending plate over hundreds of years and is eventually released (see diagram right).
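The focal-depth bands named above can be captured in a tiny helper (a sketch; the convention of assigning a band edge such as 75 km to the shallower class is my own choice, not stated in the text):

```python
# Classifying earthquake focal depth using the Benioff zone depth bands above:
# shallow 0-75 km, intermediate 75-300 km, deep 300-700 km.
# Band edges are assigned to the shallower class here (an assumed convention).

def focus_class(depth_km):
    if depth_km < 0 or depth_km > 700:
        raise ValueError("earthquake foci are recorded between 0 and 700 km")
    if depth_km <= 75:
        return "shallow"
    if depth_km <= 300:
        return "intermediate"
    return "deep"

print(focus_class(18))   # shallow (e.g. the 18 km Loma Prieta focus discussed later)
print(focus_class(150))  # intermediate
print(focus_class(500))  # deep
```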
ii. Conservative Plate Margins
At conservative plate margins two plates move
side by side to form fault zones. The cause of shallow focus earthquakes at conservative plate margins
(San Andreas Fault, California) is:
due to the sudden release of pressure which builds up over time at transform (conservative) fault
boundaries as two plates move past each other in opposite directions. As the plates slide past
each other, stress builds up on the fault plane while the fault remains "locked" by friction. Once the
stress builds up enough to overcome the friction, the fault slips and the strain is released, causing an
earthquake. The longer the strain has been accumulating, usually the larger the resulting earthquake
once the fault slips!
iii. Constructive Plate Margins
A constructive plate margin is formed where two plates move away from each other, generating shallow
focus earthquakes. The causes of shallow focus earthquakes at constructive plate margins (Iceland and
along all MORs) are:
rising magma forcing its way through the lithosphere beneath the mid-oceanic ridge.
the sudden release of pressure which builds up over time at transform fault boundaries due to
the offset nature of the mid-oceanic ridge (remember transform faults result because different
segments of the ridge are spreading at different rates).
movement along step faulting associated with fracturing of the lithosphere as the oceanic crust
is pulled apart and thinned while the lithosphere is under extension (think of the Stretch Armstrong
analogy).
iv. Hot Spots
Mid-plate earthquakes occur for a number of reasons:
rising magma forcing its way through the lithosphere beneath hot spot volcanoes (Hawaii) causes
fracturing in the crust (tectonic tremors) and the filling of magma chambers (harmonic tremors).
Earthquakes may also occur at intra-plate locations such as East Africa.
2. Distribution
The distribution of earthquakes is commonly linked to the margins of the global plates. The statement
that "earthquakes occur at plate margins" is broadly true, as 95% of all earthquakes globally occur at
plate margins. However, it is a gross simplification, for it does not differentiate between earthquakes
being more common and more devastating at some margins than at others. Nor does it account for the
remaining 5%, which result from localised faults occurring within plates. Some earthquakes occur away
from plate margins (intra-plate), such as at Hawaii, where they are due to rising magma; earthquakes
also commonly occur in East Africa, where they are due to crustal extension of the African Plate (where
a new ocean is emerging), causing frequent normal faulting and associated ground subsidence. Other
earthquakes, such as those in the UK, e.g. the
magnitude 5.2 quake that occurred along the Market Rasen Fault on 27th February 2008 (felt in Blackburn
and Clitheroe), can generally be classified as earthquakes caused by the reactivation of ancient faults.
Human activity may also generate earthquakes: the filling of reservoirs may load the crust and cause
slippage along existing faults, building work sometimes reactivates fault zones, and explosions, deep
mining, oil extraction and nuclear testing can all do the same!
The map above shows the global distribution of earthquakes and the major tectonic plates and margins.
It is possible to identify the following features from the map.
Most earthquakes do coincide with the major plate margins.
A number of earthquakes occur away from plate margins - these are often referred to as intra-plate or mid-plate earthquakes, such as in Hawaii, the UK and Eastern Africa.
Certain margins have a far greater "density" of earthquakes than others. For example, there
appear to be far more earthquakes along the west coast of South America and in the
Japan/Philippine region than along the Pacific Rise or Mid-Atlantic Ridge.
There is a large density of quakes occurring along the boundary of the Pacific Plate (destructive
margin); this is known as the circum-Pacific seismic belt and it accounts for 81% of the world's
largest earthquakes.
Earthquakes form a narrower spread at some plate margins than at others. Generally speaking,
the earthquakes at destructive plate margins have a greater spread (and occur in wider belts)
and therefore affect more places than those at constructive plate margins (as many constructive
plate margins are submarine), although certain submarine earthquakes can trigger tsunamis which
have far reaching effects (see later notes).
The pattern of earthquakes can be seen to be linear, often in narrow bands which are rarely
straight and tend to be arcuate in shape.
3. Frequency & Scale
Earthquakes are probably the most frequent of all natural hazard events, yet their impact on people,
property and communities varies enormously globally. Earthquakes release enormous amounts of energy
in a relatively short space of time (many less than 30 seconds duration). Their impact is sudden, and they
can occur with very little warning. Every year the Earth shakes something like 500,000 times. Most of
these are so tiny that they can only be detected by seismographs. About 100 serious earthquakes
greater than magnitude 6 occur every year, while on average only 1 or 2 major quakes in excess of
magnitude 8 occur over the same period.
Magnitude | Number per year | Example
9.0 | Usually 1 per 40 years or more | Sumatra, Indonesia, Boxing Day 2004 (9.1); Prince William Sound, Alaska 1964 (9.2); Chile 1960 (9.5)
8.0 | 1 or 2 | San Francisco 1906 (7.8)
7.0 - 7.9 | 18 | Mexico City 1985
6.0 - 6.9 | 108 | Northridge, L.A. 1994 (6.7)
5.0 - 5.9 | 800 | Market Rasen, UK 2008 (5.2)
4.0 - 4.9 | 6,200 |
3.0 - 3.9 | 49,000 | Manchester 2002 (3.9)
2.0 - 2.9 | 300,000 | Manchester, August 2007 (2.5)
<2.0 | millions |
The scale of an earthquake is often measured on the Richter scale. This scale was devised by Charles
Richter in 1935 and, although open-ended, rarely extends above 8. Magnitude 9 quakes have only rarely
been recorded on seismographs, such as the Sumatra Boxing Day earthquake of 2004 (9.1). The reason why
earthquakes greater than about magnitude 9 do not occur probably lies in the fact that the brittle crust
can accommodate only a finite amount of strain before cracking - an amount that corresponds to a
magnitude 8.9 - 9 earthquake. The scale is logarithmic, meaning that each point on the scale represents a
quake 10 times more powerful than the point below. For example, an earthquake measured as 6 on the
Richter scale would be 10 times more powerful than one measuring 5, and 100 times more powerful than
one of 4. The Richter scale's main advantage is that it is objective and allows direct comparison
between earthquakes regardless of building standards and the level of economic development in a country
at the time of the event.
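The logarithmic arithmetic described above can be sketched as follows. The tenfold step refers to measured wave amplitude; the roughly 32-fold energy step uses the standard magnitude-energy relation, which is an addition not stated in these notes:

```python
# The Richter scale is logarithmic: each whole-number step is a
# tenfold increase in measured wave amplitude, and (by the standard
# magnitude-energy relation, an assumption here) a ~32-fold energy increase.

def amplitude_ratio(m1, m2):
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m1 - m2))   # standard magnitude-energy relation

print(amplitude_ratio(6, 5))      # 10, ten times the ground motion
print(amplitude_ratio(6, 4))      # 100
print(round(energy_ratio(6, 5)))  # 32 times the energy per whole step
```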
The Richter scale measures the magnitude of the seismic waves at the focus of an earthquake. This is
really the "total energy given off" by an earthquake, and can be calculated using the amplitudes of the
seismic waves. This gives a numerical figure for the total amount of energy released, enabling
earthquakes of all sizes to be compared globally. A disadvantage of the Richter scale is that it does not show how
much damage a particular earthquake is capable of. For example, deep focus earthquakes are not very
effective at generating surface waves, so they tend not to cause much damage. Also, what effect an
earthquake has depends on when and where it occurs. A magnitude 8 quake in the middle of nowhere will
probably cause much less damage than a magnitude 6 quake with its epicentre in a major city, especially
if it occurs at rush hour. The effect of an earthquake also depends on the nature of the rock near the
surface. In the Mexico City quake of 1985 the soft sandy subsoil quivered so vigorously that buildings
sank deep into it, causing many deaths. A similar earthquake below a city built on hard rock would
probably have had much less effect.
Magnitude | Description
1 - 2 | Recorded on local seismographs, but generally not felt
3 - 4 | Often felt, no damage
5 | Felt widely, slight damage near epicentre
6 | Damage to poorly constructed buildings and other structures within 10's of km
7 | "Major" earthquake, causes serious damage up to ~100 km (recent Taiwan, Turkey, Kobe, Japan, and California earthquakes)
8 | "Great" earthquake, great destruction, loss of life over several 100 km (1906 San Francisco)
9 | Rare great earthquake, major damage over a large region over 1000 km (Chile 1960, Alaska 1964)
An alternative to the Richter scale is the Mercalli scale. This ranks earthquakes according to the
intensity of their effects, and can also be used to map the extent to which concentric areas are
affected by a single earthquake; this is known as an isoseismal map. The Mercalli scale really gives a
measure of the damage caused. It is based on observations of the effects and damage resulting
from an earthquake; usually the damage is ascertained by the use of a public questionnaire. In this sense
the scale is more subjective than the Richter, but it does have its advantages. Firstly, it does not rely on
the use of seismographs, and secondly the size of the earthquake can be determined even some time
after it happened using historical records, interviews or eyewitness accounts. A major drawback of the
scale, however, is that it only works for inhabited areas. With no buildings to demonstrate the impact of
an earthquake, it is impossible to determine its intensity using the scale.
Mercalli intensity | Description
XII | Damage total.
XI | Few if any masonry structures remain standing.
X | Rails bent.
IX | Damage great in substantial buildings.
VIII | Fall of chimneys. Heavy furniture overturned.
VII | Difficult to stand upright.
VI | Felt by all, many frightened. Damage slight.
V | Windows broken. Unstable objects overturned.
IV | Felt by many indoors, outdoors by few.
III | Feels like passing traffic.
II | Felt only by a few, especially on upper floors.
I | Generally detected by instrument only.
4. Effects of Earthquakes
Earthquakes have major impacts both on people and on the physical environment. These effects can be
split up into the following:
i. Physical Environment
Ground shaking/movement causes ground/fault displacement & fault scarps.
Soil liquefaction makes weak ground/sediments act like a liquid & can cause flooding.
Landslides remove vegetation & reduce slope angles.
Tsunami can cause catastrophic flooding.
ii. Built Environment
Loss of communications (roads, elevated highways, railways, bridges, electricity lines).
Loss of homes
Loss of industrial buildings/facilities
iii. Human Environment
Death and injury
Destruction of homes causing homelessness.
Loss of factories/industry causing a loss of livelihood and unemployment.
Loss of communications hindering rescue/emergency services and rebuilding/rehabilitation.
The impact of earthquakes varies significantly across the world. This is partly because the events
themselves are unevenly distributed, both in terms of their geographical location and their magnitude,
but also because people and societies have reached different levels of “preparedness”, in terms of
building design and construction, and in their ability to educate people and respond after an earthquake
event. This preparedness of a community before an earthquake, like many aspects of geography, is often
controlled by the level of economic development within a country. What follows is a potted history of some
major earthquake disasters from around the world and what caused the damage at each:
Mexico City, 1985 (LDC)
In 1985, an earthquake measuring 8.1 on the Richter scale was recorded in a Pacific oceanic trench off
the west coast of Mexico. Here the Cocos Plate is being subducted beneath the North American Plate.
Considerable damage was done in the coastal states adjacent to the epicentre, but by far the greatest
devastation occurred some 350 km away in Mexico City. After a transmission time of 1 minute, the
earthquake waves arrived in Mexico City. The ground waves were then amplified 4 or 5 times by the
ancient lake sediments on which the city is built. Soft, high-water content sediments like muds, clays and
silts cause an intensification of the vibrations of earthquake waves as they pass through them and make
the rocks appear to behave like a liquid. This is called liquefaction. More than 10,000 people were killed,
approximately 50,000 people were injured and 250,000 were made homeless. Many of the buildings in
the city became tilted and damaged (7,400) or were destroyed (770). Mexico City has a population of
over 20 million people, and is growing at a rate of 2.6% per year. This rapid expansion, alongside the
country's lack of wealth, often means building design is inadequate and, although building design
standards might be officially in place, regulations are rarely enforced. This was certainly the case in
Mexico City where several modern high-rise buildings collapsed as concrete crumbled and thin steel
cables tore apart. The 12-storey central hospital collapsed like a pack of cards losing two thirds of its
height as ceilings fell onto the floors below, crushing its inhabitants. It is clearly one of the world's
most hazard-prone cities!
Armenia, 1988 (LDC)
In 1988 a 6.9 magnitude earthquake struck the former Soviet republic of Armenia. It killed 25,000
people, injured 31,000 and made 500,000 homeless. Some 700,000 people lived within a 50 km radius of
the earthquake epicentre. Compare this with the Loma Prieta earthquake in California which was twice as
powerful, affected more than twice the population, but killed only 0.25 % as many people. The building
design is crucial in explaining the differences. In Armenia, the older stone buildings were destroyed by
the ground shaking, and the pattern of destruction was as might be expected with 88% destroyed in
Spitak (5 km from the epicentre) and 38% destroyed in Leninakan (35 km from the epicentre). However,
in Leninakan, 95% of the more modern 9-12 storey pre-cast concrete-frame buildings were destroyed.
Here the buildings were constructed on soft sediments, which caused eight times the ground shaking for
three times longer. The fact that the buildings had no earthquake-proofing features in their design,
combined with the soft foundations, resulted in the high death toll. Poorer countries like Armenia also
tend to be less well prepared for earthquake disasters and their aftermath. Whilst this is due in part to
the lack of money to invest in preparedness materials and education programmes, it is also because
earthquakes are often perceived as infrequent problems in a society facing daily struggles of a much
more mundane and "important" nature.
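The "twice as powerful" comparison above can be checked with the standard magnitude-energy relation (an assumption on my part; the relation itself is not given in these notes):

```python
# Checking the Armenia vs Loma Prieta comparison: energy released
# scales as 10^(1.5 * M), so a 0.2 magnitude gap is about a factor of two.

armenia_m = 6.9
loma_prieta_m = 7.1   # Loma Prieta magnitude given later in these notes

ratio = 10 ** (1.5 * (loma_prieta_m - armenia_m))
print(round(ratio, 1))  # 2.0, Loma Prieta released roughly twice the energy
```

The striking point, of course, is that the more powerful quake killed only a tiny fraction as many people, which the text attributes to building design and preparedness rather than to the physics.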
Unfortunately, history repeated itself in the Russian town of Neftegorsk in May 1995 when 2000 people
died. Here seven 5-storey tenement blocks built in the 1960s collapsed as people slept. Following the
Armenian disaster, this second tragedy called into question former Soviet construction standards in
seismically active areas. A professor at a university in the USA has since written: " Clearly, with regard
to earthquake hazards, the system of prefabrication and site assembly of structural components in use
in Armenia was deeply flawed."
Loma Prieta (San Francisco), 1989 (MDC)
The 7.1 magnitude Loma Prieta earthquake was the most costly natural disaster in the USA since the
1906 San Francisco earthquake. The earthquake was caused by a slip along the San Andreas Fault. It was
a shallow focus earthquake, some 18km below the surface. San Francisco is a densely populated region of
the world, over 1.5 million people lived within a 50 km radius of the epicentre. However, the damage was
mostly confined to certain parts of the city. The damage varied according to the surface materials the
buildings were built on, as well as the distance from the epicentre. Overall very few people were killed in
this earthquake, but those deaths that did occur were primarily due to the collapse of buildings and
structures such as bridges and elevated highways. For example, 41 of the 67 deaths occurred when the upper tier
of the Nimitz Freeway collapsed. This section of the road was constructed on mud and bay-fill material
used to infill San Francisco Bay. As this weak material was shaken it acted like a liquid (liquefaction) and
caused the freeway supports to collapse. Elsewhere the roads survived the shaking. The Marina District
of San Francisco was also severely damaged during the earthquake. Buildings were distorted, service
pipes snapped and pavements buckled. Four people died, seven buildings collapsed, and sixty-three
buildings were unsafe to enter. This part of the city was built on bay-fill and mud, including some of the
debris dumped in the Bay after the 1906 earthquake! The strong ground shaking associated with the
earthquake was amplified by the artificial landfill deposits and the natural coastal muds in this area,
making them behave like a liquid.
Northridge (Los Angeles), 1994 (MDC)
A powerful 6.7 magnitude earthquake struck in the pre-dawn hours beneath the densely populated L.A
area of Northridge. It caused considerable damage and disruption, however the financial losses and
amount of destroyed infrastructure were much less than similarly sized earthquakes such as Kobe, and
amazingly only 40 people were killed. Los Angeles is a wealthy and well-prepared city. Here building
materials and appropriate design minimise loss of life. Most family homes survived without major
structural damage as they were relatively new and subject to building codes that specified aseismic
design features such as cross bracing and light-weight roofs. Also, in wealthy areas where earthquakes
are common such as California, much is done to prepare for the inevitable earthquake. There are regular
earthquake drills in schools and offices. People are informed about potential dangers and how to respond
when an earthquake happens. The emergency services practice their response procedures. Supplies of
food, water, medicines and shelter are stored in recognised safe areas ready for coping with the
aftermath of an earthquake. Education and preparation are undoubtedly factors in reducing the scale of
a disaster, particularly regarding the response after the event in terms of rescuing injured people and
avoiding the spread of disease.
Kobe, Japan 1995 (Great Hanshin Earthquake) (MDC)
Many of the world's largest and most densely populated cities lie in the heart of "earthquake country".
Massive conurbations like Kobe-Osaka (10 million population) are especially vulnerable, with their densely
packed buildings and raised freeways. The most devastating earthquake to strike Japan since the Tokyo
earthquake of 1923 occurred near the city of Kobe (population 1.5 million) on 17th January 1995. The
earthquake measured 6.9 on the Richter scale, and the epicentre appears to have been located on a
shallow fault zone. This shallow focus, transform (strike-slip) pattern of earthquake was similar to the
Loma Prieta event. The high magnitude and shallow focus would be expected to do serious damage.
Shallow earthquakes occurring close to the surface tend to result in greater intensity of surface
shaking and often cause the greatest loss of life and damage to property. Over 6,000 people were killed,
35,000 people were injured and 300,000 (20% of Kobe's population) were made homeless. Nearly
180,000 buildings were damaged, and of these 103,500 were destroyed. These were mostly the older
concrete buildings lacking inbuilt aseismic protection. The worst devastation occurred in traditional style
Japanese wooden houses, designed to withstand heavy rains and typhoons, not earthquakes. The cost of
reconstruction after the Kobe earthquake is thought to be well over $100 billion! Although this is high,
countries at a higher level of economic development such as Japan always tend to suffer massive
financial losses as insurance companies and the government fund rebuilding programmes and pay
compensation. However they are often assumed to suffer less in terms of human costs. This was not the
case in this instance. Kobe suffered a huge death toll, far larger than its level of economic development
would have suggested. There were many reasons for this. Most people in Japan prefer to live in houses
built using traditional Japanese methods. This was the housing type that contributed to the greatest
number of deaths. Traditional Japanese housing construction is based on a post-and-beam method with
little lateral resistance. Exacerbating the problem is the practice of using thick mud and heavy tiles for
roofing, resulting in a structure with a very heavy roof and little resistance to the horizontal forces of
earthquakes.
Japan was thought to be highly prepared for an earthquake. However, even the best laid plans can fail to
live up to expectations as was the case with the Kobe earthquake when emergency teams reacted slowly
and appeared to be totally overwhelmed by the scale of the disaster. Complacency had also bred from the
belief that Japanese seismologists could predict the next "Big One", and from the impression given by
political leaders that they would be ready for it when it came. It is usually stated that poorer countries
suffer most after an earthquake event. However, even rich and economically developed countries have
poorer areas, and it is often these areas that suffer the most. In this instance poorer areas such as the Nagata ward were where most of the deaths occurred. Here the poorer members of society could not
afford earthquake resistant buildings. A significant cause of damage in Kobe was the quality of the soils
and bedrock. Due to a severe shortage of available land, much of modern urban Japan, including Kobe, is
built on the worst soil possible for earthquakes. Much of the newer construction in Kobe, particularly
larger buildings, is built on very soft, recent alluvial soil and on recently constructed near-shore islands.
Most of the serious damage to larger commercial and industrial buildings and infrastructure occurred in
areas of soft soils and reclaimed land. The worst industrial damage occurred at or near to the Port of
Kobe due to liquefaction and lateral spreading. Engineers had tried hard to develop methods for
strengthening reclaimed areas to resist failures during earthquakes, but most of these methods were
put into practice without the benefit of being adequately tested in strong earthquakes. Remarkably, the Akashi Kaikyō suspension bridge, then the longest in the world and still under construction at the time with a main span of almost 2,000 m, survived the earthquake, although ground displacement forced its two towers about a metre further apart. The bridge links mainland Honshū to Awaji Island as part of the Honshū–Shikoku crossing.
The knowledge to significantly improve structures to resist earthquake damage and thereby avoid most
of the deaths and financial losses existed in Kobe. However, what was lacking was a consistent willingness to enforce building codes, or to check that regulations were being followed in every new building. It is an odd paradox, for time and time again it has been demonstrated that it usually costs less
to prepare for earthquakes in advance than to repair the damage afterwards. Another problem affecting
Kobe was that the city didn't have the yearly drills to test civilian and military responses that Tokyo
does (1st Sept Disaster Day). This resulted in deadly confusion that seemed to overtake every level of
government. Immediately after the quake, Kobe authorities failed to cordon off main roads for official
use, and the delay of police and fire vehicles undoubtedly raised the death toll. For 4 hours the
Governor of Kobe district neglected to make the necessary request for aid to the national armed forces.
The national government could have stepped in sooner to aid with co-ordination. Also there were delays
in accepting international help, such as from the US military based in Japan or foreign medical teams
and sniffer dogs.
Sichuan, China 2008 (LDC)
On the afternoon of May 12, 2008, a 7.9 magnitude earthquake
hit Sichuan Province, a mountainous region in Western China,
killing about 70,000 people and leaving over 18,000 missing. Over
15 million people lived in the affected area, including almost 4
million in the city of Chengdu.
According to a study by the China Earthquake Administration
(CEA), the earthquake occurred along the Longmenshan fault, a
thrust structure along the border of the Indo-Australian Plate
and Eurasian Plate. Seismic activity concentrated on its mid-fracture (known as the Yingxiu-Beichuan fracture). The rupture
lasted close to 120 sec (a long quake!), with the majority of
energy released in the first 80 sec. Starting from Wenchuan, the
rupture propagated at an average speed of 3.1 kilometres per second on a bearing of 49° toward the north-east, rupturing a total of about 300 km. Maximum displacement amounted to 9
meters. The focus was deeper than 10 km.
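These rupture figures are internally consistent, which can be checked with simple arithmetic: a 300 km rupture propagating at an average of 3.1 km/s should take roughly 97 seconds, matching the quoted duration of close to 120 seconds with most energy released in the first 80 seconds. A minimal sketch, using only the figures quoted above:

```python
# Cross-check of the Sichuan rupture figures: duration ~ length / speed.
rupture_length_km = 300    # total rupture length quoted above
rupture_speed_km_s = 3.1   # average propagation speed quoted above

duration_s = rupture_length_km / rupture_speed_km_s
print(f"Estimated rupture duration: {duration_s:.0f} s")  # ~97 s
```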
In a United States Geological Survey (USGS) study, preliminary rupture models of the earthquake
indicated displacement of up to 9 meters along a fault approximately 240 km long by 20 km deep. The
earthquake generated deformations of the surface greater than 3 meters and increased the stress (and
probability of occurrence of future events) at the northeastern and southwestern ends of the fault.
After Chinese geologists studied the region they realised that displacement had occurred along ancient faults that had reactivated. These faults had remained unrecognised and unmapped in this region. Mapping these fault lines is a time-consuming business that will take years to complete, although the results will be worthwhile, as engineers can use the maps to inform where to site high-risk public buildings such as schools.
Since the Tangshan earthquake in 1976, which killed over 240,000 people, China has required that new
structures withstand major quakes. But the collapse of schools, hospitals and factories in several
different areas around Sichuan has raised questions about how rigorously such codes have been
enforced during China's recent, epic building boom.
In June 2008, low-lying areas in one of the towns most devastated by the earthquake were flooded as a
torrent of water was released from a dangerous lake formed by landslides, dislodging wrecked homes,
cars and corpses.
The surge of floodwater into the town, Beichuan, was part of an effort by engineers and soldiers to
drain Tangjiashan, one of more than 30 so-called quake lakes that were formed by landslides. For weeks,
the dam of rock and mud holding back the rising waters of the Jian River there had threatened to burst
and flood towns and cities downstream that are home to 1.3 million people.
Another smaller earthquake struck the region in August 2008, damaging 258,000 homes and killing at
least 32 people.
Thousands of the initial quake's victims were children crushed in 'shoddily' built schools, inciting protests by parents. Local police harassed the protestors and the government criticised them. At least one human rights advocate who championed their cause was arrested. The human loss was intensified because the children killed in this region were born to parents living in rural regions of China who adhered to the one-child policy; masses of only children were therefore lost in the collapsed schools and public buildings.
The Chinese government has refused to release the number of students who died or their names. But
one official report soon after the earthquake estimated that up to 10,000 students died in the collapse
of 7,000 classrooms and dormitory rooms. Reports emerged in July 2008 that local governments in the
province had begun a coordinated campaign to buy the silence of angry parents whose children died
during the earthquake. Most parents whose children died took a payment of about $8,800 from the
local government and a guarantee of a pension in exchange for silence.
In December 2008, government officials acknowledged in the most definitive report since the
earthquake that many school buildings across the country are poorly constructed and that 20 percent of
primary schools in one south-western province may be unsafe.
In February 2009, a growing number of American and Chinese scientists suggested that the calamity
was triggered by a four-year-old reservoir built close to the earthquake's geological fault line. A
Columbia University scientist who studied the quake has said that it may have been triggered by the
weight of 320 million tons of water in the Zipingpu Reservoir less than a mile from a well-known major
fault. His conclusions, presented to the American Geophysical Union in December, coincide with a new
finding by Chinese geophysicists that the dam caused significant seismic changes before the earthquake.
By the first anniversary of the quake, mothers across the region were pregnant or giving birth again,
aided by government medical teams dispensing fertility advice and reversing sterilizations. Because of
China's policy limiting most families to having one child, the students who died were often their parents'
only offspring. Officials say they hope a wave of births will help defuse the anger that many grieving
parents harbour.
But the wounds have festered, in part because the Chinese government, wary of any challenge to its
authoritarian rule, has muffled the parents and quashed public discussion of shoddy school construction.
As the anniversary of the quake again focused attention on Sichuan, the government intensified its
campaign to silence the parents and the media, resorting to harassment by police and threats of
imprisonment. The Sichuan government has explicitly prohibited media organizations from reporting on
miscarriages by women in temporary housing camps. Some quake survivors say they fear that the
miscarriages may have been caused by high levels of formaldehyde in the prefabricated housing.
5. Responses to Earthquakes
People respond to earthquakes and the threats they pose to human life and possessions in a way that is
designed to reduce the risks. This response can occur at a range of levels, from the individual and local
communities to national or international level. The response(s) chosen, if any, will depend upon the nature
of the hazard, past experience of hazardous events, economic ability to take action, technological
resources, hazard perceptions of the decision-makers, knowledge of the available options, and the social
and political framework. People and organisations may not adopt all the available strategies, since
resources of time and money are needed for this. The relative importance of the threat from natural
hazards compared with other concerns such as jobs, money, education, for individuals or governments,
will be major factors.
The range of responses available can be divided into 5 broad groups:
Avoidance (very reactive approach)
Modify the event
Modify the vulnerability
Modify the loss
Acceptance (very fatalistic approach)
The first approach is not always possible and the last is not always advisable! It is the various ways of modifying the event, the vulnerability and the loss that are the most often used responses.
i. Modify the Event
Controlling the physical variables - The physical control of an earthquake event itself is very unlikely for the foreseeable future, and is not a realistic form of management! Fluid injection was found to trigger small quakes in the 1960s, when the US military disposed of toxic waste in deep wells at the Rocky Mountain Arsenal near Denver, but deliberate attempts to relieve strain in this way have proved unsuccessful, and future projects near urban areas would be unwise as they could potentially trigger dangerous quakes. Therefore, we must look to other forms of management.
Hazard-resistant design - The collapse of buildings and structures is responsible for the
majority of deaths, injuries and economic losses resulting from an earthquake. Thus the impact
of the hazard can be reduced by incorporating earthquake-resistant (aseismic) design features.
Buildings made of mud-brick (adobe) or other materials without any reinforcement collapse
easily during an earthquake. In multi-storey buildings or buildings of complex shapes the shaking
can be increased with height as the building moves, and the buildings may twist as well as shake.
Each earthquake event provides engineers with lessons in how buildings perform, so techniques
and regulations are constantly being updated and improved. However, this does mean that older
buildings may not be as safe as once thought. In the Kobe (1995) and Northridge, California
(1994), earthquakes, buildings constructed in the 1980's and 1990's performed much better
than those built before this. Currently there are a number of aseismic designs:
Cross-bracing steel framed buildings - by adding eccentric cross-bracing to a structure, the building is more "ductile" and able to twist in response to the pressures imposed by earthquakes, minimising the damage caused to the structure.
Counter-weights - a large concrete weight on the top of the building, activated by computer-controlled dampers, moves in the opposite direction to the force of the earthquake to counteract stress on the structure. A total power failure, however, means the block cannot be moved!
Rubber shock absorbing foundations - large rubber shock absorbers planted into the
foundations allow the building to rock back and forth and up and down, without too much damage
to the structure.
Base isolators - another engineering strategy for reducing earthquake damage is to partially
isolate a building from ground shaking in an earthquake. This strategy, called base isolation, is
increasingly used to safeguard important structures. Displacement of base isolators prevents
large displacements of floors of the building above. Building response recorded by the California
Geological Survey in the 1994 Northridge earthquake (magnitude 6.7) confirmed the promise of
the base-isolation strategy. The 8-storey steel superstructure of the University of Southern
California (USC) University Hospital in Los Angeles is supported by 149 isolators sitting on
continuous concrete footings. During the Northridge earthquake, motions recorded at the top of
the isolators and at the roof were less than those recorded in the ground below the isolators and
at a nearby site removed from the building. The isolators reduced the level of motion fed into
the base of the building by about two-thirds. The peak shaking at roof level was only about 40%
of that recorded on the ground about 200 feet from the building, whereas with a conventional
foundation the roof-level shaking would have exceeded that measured on the ground. The USC
University Hospital was built with base isolators to allow it to withstand strong earthquake
shaking. The success of this design strategy was demonstrated in the 1994 Northridge quake,
when the hospital and its contents suffered no damage, despite the severe ground shaking
produced by the quake.
Deep foundations - deep foundations secure buildings to the ground and shake with it rather
than away from it. Steel pillars can be driven deeply into the ground and are particularly
effective in domestic dwellings in earthquake prone areas.
Full metal jackets - these are used to encase concrete pillars to stop the cracked concrete from
falling away during an earthquake, giving the structure continued strength and preventing
collapse. Many bridges and elevated freeways have been retrofitted with full metal jackets since
they were first built.
Wattle-and-daub and timber-framed buildings - low-cost housing using cheap local materials has been developed that does not require dangerous materials such as breeze blocks, corrugated iron and concrete lintels, which cause death and injury in an earthquake.
Single-storey buildings - these are less likely to collapse than taller buildings.
Stepped/triangular profile - again these are less likely to fall down due to their strength and lower centre of gravity.
These are expensive technological "fixes", best used on public buildings and utilities. Most of the recently built Kobe buildings of aseismic design survived. However, out of the 269 high-rise commercial buildings in Kobe, 62
were demolished. Supposedly earthquake-proof structures like the Hanshin expressway collapsed over
long sections. Poorer members of society, such as in Nagata ward where most of the deaths occurred,
could not afford earthquake-resistant buildings. Due to expense it is not possible to rebuild the whole
city with aseismic design. The government has paid for the rebuilding of the public infrastructure, but
has given no money for individuals to rebuild. Also an aseismic design means that the building is less
likely to collapse on its occupants - it does not mean that the structure is not damaged beyond repair.
ii. Modify the Vulnerability
Earthquake Forecasting - Achieving the ability to forecast earthquakes accurately and
repeatedly is currently the Holy Grail of geophysics, guaranteeing a Nobel Prize to the
successful discoverer. But just what exactly does forecasting mean? First of all it might be
useful to distinguish between the terms 'forecast' and 'prediction', both of which tend to be
used interchangeably to describe the efforts of seismologists to see into the future. Given the
frequency with which so many of us have come home cold and drenched at the end of a day that
the smiling television weather-forecaster has assured us will be one of warmth and brilliant
sunshine, it should surprise no one that the term forecast is a relatively imprecise statement. In
contrast prediction relates to something altogether more precise and accurate, which covers a
shorter time period than a forecast. On the basis of these definitions we can already forecast
earthquakes, in a manner of speaking, but it remains contestable whether anyone has yet been
successful in predicting an earthquake. Chinese scientists do, however, claim to have predicted a
quake in Haicheng in 1975, about 5 hours before the event. There are a number of methods of forecasting earthquake events:
Return Times - Earthquake forecasting is all about looking at the past record of earthquakes
for a particular region or for a specific fault in order to reveal some sort of regular pattern of
events that can provide a clue to the size, timing and location of the next quake. For example, if
a magnitude 6 quake had occurred on a particular fault in 1800, 1850, 1900 and 1950, it would be
reasonable to forecast that the next earthquake in the sequence would take place in the first
year of the new millennium. Unfortunately, things are never quite this simple. Although
earthquakes occurring on a specific fault do have regular return times, they do not appear quite
like clockwork. A more realistic sequence for our hypothetical quake might be 1800, 1847, 1911
and 1950. This gives an average return time of 50 years, but the actual gaps between the
different quakes range from 39 to 64 years. On the basis of this information alone, a
seismologist making a forecast in 1950 would have to say that the next event in the sequence
would be likely to occur sometime between 1989 and 2014 - not a particularly useful piece of
information for the inhabitants and disaster managers of the threatened region. Furthermore,
even this forecast might be of limited use, because four dates over 200 years might not
accurately describe the true pattern of earthquakes, which could be determined only if the
record went back much further. For example, if the quake prior to 1800 had taken place in 1620,
the whole forecasting exercise becomes virtually meaningless. The lesson then is that, the longer
the record, the more accurate a forecast based upon it is likely to be. For the Los Angeles area
of California such a record is available from the dating of earthquake-related deformation over
the past 1,500 years or so. This reveals that, since AD 565, there have been eight major
earthquakes, spaced at intervals ranging from 55 to 275 years, and with an average return time
of 160 years. The last time the Earth moved in a big way was in 1857, so LA has a reasonable chance of being faced with the next "big one" sometime in the first few decades of this century. Then again, it might not happen until 2132.
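The return-time reasoning above is easy to reproduce: given the dates of past quakes on a fault, the gaps between them yield an average recurrence interval and a crude forecast window. A minimal Python sketch using the hypothetical dates from the text (1800, 1847, 1911, 1950):

```python
# Forecast window from a historic earthquake sequence.
quake_years = [1800, 1847, 1911, 1950]  # hypothetical sequence from the text

gaps = [later - earlier for earlier, later in zip(quake_years, quake_years[1:])]
mean_gap = sum(gaps) / len(gaps)  # average return time

# Crude forecast window: last event plus the shortest and longest observed gaps.
last_event = quake_years[-1]
window = (last_event + min(gaps), last_event + max(gaps))

print(f"Gaps: {gaps}, average return time: {mean_gap:.0f} years")
print(f"Next event expected between {window[0]} and {window[1]}")  # 1989 and 2014
```

As the text notes, the wide window (1989-2014) is of little practical use to disaster managers, and the estimate is only as good as the length of the historical record behind it.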
The reason why earthquakes on a specific fault occur reasonably regularly is that their purpose
is to release strain that is accumulating in the rocks. If the strain is increasing at a constant
rate - and there is a certain threshold, determined by the material properties of the rock and
the nature of the fault and its surroundings, above which the strain is released by movement on
the fault - then some periodicity in the earthquake record should not be surprising. The strain
build-up on many faults relates directly to the rate of movement of the lithospheric plates, and
most earthquakes occur on faults that coincide with plate boundaries that are moving past one
another at several centimetres a year. If the annual rate of movement is known and constant,
which it is for the Earth's plates, and the amount of fault movement that occurs during each
earthquake is also known, then the frequency of quakes can be worked out. If the lithosphere on
either side of a fault is moving at 5 cm/year, and the fault jumps 5m during every quake, then
the strain accumulating in the rock must be released every 100 years. So the fault is trying to
move continually at 5 cm/year, but frictional forces between the rock masses on either side
prevent this. After a century the accumulated strain is great enough to overcome the friction,
and the fault jumps 5m to make up for lost time and reduce the strain across it to zero. Then
the cycle starts all over again. Japan has a major earthquake approximately every 10 years,
whilst the Parkfield area of the San Andreas Fault has a return time of between 20-30 years.
This is based on a series of repeating earthquakes on this stretch of the fault. The previous
events were in 1857, 1881, 1901, 1922, 1934, and 1966. The next earthquake in this sequence was
predicted to occur by 1993. However, the anticipated Parkfield earthquake did not occur until
September 28, 2004, when a magnitude 6.0 earthquake occurred on the San Andreas fault. It
ruptured roughly the same segment of the fault that broke in 1966 becoming the seventh quake
in the sequence.
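The slip-rate arithmetic above (a fault loading at 5 cm/year that jumps 5 m per quake must fail roughly every 100 years) generalises to any fault whose slip rate and per-event displacement are known. A hedged sketch; the second call uses purely hypothetical values for illustration:

```python
def recurrence_interval_years(slip_per_event_m: float,
                              slip_rate_cm_per_year: float) -> float:
    """Years of steady plate motion needed to accumulate one earthquake's slip."""
    slip_rate_m_per_year = slip_rate_cm_per_year / 100.0  # convert cm/yr to m/yr
    return slip_per_event_m / slip_rate_m_per_year

# The worked example from the text: 5 m of slip released at 5 cm/year.
print(recurrence_interval_years(5.0, 5.0))  # 100.0 years

# Hypothetical fault slipping 1 m per event at 3 cm/year: roughly 33 years.
print(recurrence_interval_years(1.0, 3.0))
```

The calculation assumes a constant loading rate and a fixed slip per event; as the Parkfield sequence shows, real faults only approximate this clockwork behaviour.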
Seismic Gaps - A clue to the imminence of a larger than normal earthquake can be gained from
studying this sort of pattern and looking for gaps. If, for example, a quake forecast to occur
from strain and fault displacement data, such as that just described, fails to appear, then it is
time to start worrying. Sometimes a fault becomes 'locked' for one reason or another, and the
strain is allowed to build up over a much longer period. This means that, when the fault does
eventually move, a greater displacement will be needed in order to take the strain back down to
zero. This greater movement will mean a bigger and almost certainly more destructive quake.
Locations where this behaviour is observed are known as seismic gaps, and they are often the
sites of the biggest and nastiest quakes. The 1964 Alaska earthquake filled one of these
seismic gaps, but worrying gaps still occur around Los Angeles, in Papua New Guinea, around much
of the Caribbean and in Japan.
Stress Transfer - Making any sort of earthquake forecast is further complicated by the fact
that the faults on which quakes occur often do not exist in isolation. Typically they are
connected to other faults that have their own characteristic rates of strain accumulation and
earthquake records. Because of these complex links, a quake that releases strain on one fault
may actually transfer that strain to a neighbouring fault, bringing forward the time when that
also snaps and thus produces an earthquake. The importance of this stress-transfer phenomenon
to seismology is becoming increasingly recognized, because a relatively small movement on a
neighbouring fault may trigger a much larger displacement, and therefore earthquake, on a major
fault close by. In the Tokyo area, for example, studies of past earthquake records have revealed
that some of the biggest quakes to hit the city have been preceded by a smaller quake on a fault to
the south-west, near the city of Odawara.
Similarly, in California, seismologists are concerned that small displacements along associated
minor faults might be just enough to trigger movement along a locked segment of the great San
Andreas Fault, with devastating consequences. If the strain that has accumulated in a fault is at
a critical level, only a small degree of added stress is needed to make the fault snap and trigger
a quake. Even another earthquake thousands of kilometres away can do this: the magnitude 7.3 earthquake that occurred in 1992 near Landers, in the Mojave Desert east of Los Angeles, caused
smaller quakes throughout California and as far away as Yellowstone in Wyoming, over 1000km
distant. When a fault is poised to shift, even the Moon can have a role to play. As the Moon's
gravity pulls the oceans to produce the tides, it also pulls at the solid rock that makes up the
lithosphere. These Earth tides set up stresses within the rock that can be sufficient not only to
trigger earthquakes but also to set off volcanoes that are in a critical state.
How useful are earthquake forecasts?
Earthquake forecasts, then, have a useful role to play in providing a general guide to roughly
when the next quake might occur. For disaster managers and the emergency services, however,
they are not particularly useful. Such people need something much more precise that will tell
them, with sufficient warning, when and where the next earthquake will strike and, ideally, how
big it is going to be. As mentioned earlier, the Chinese claim to have successfully provided such a prediction before an earthquake close to the city of Haicheng in 1975. Following several months
of tilting of the land surface, water gushing from the ground and strange animal behaviour, over
90,000 citizens were evacuated from the city on 4 February. The next day, just over 12 hours
later, a magnitude 7 quake hit the city, destroying over 90 per cent of the buildings but taking
virtually no lives because of the timely exodus. Was this a true prediction, however, or did the
scientists and authorities get the timing of the evacuation right more by luck than judgement?
In fact, the latter is probably true. Other factors, such as ground-surface movements, water
bursts and unusual animal behaviour, may all indicate that an earthquake is on its way, but it has
yet to be demonstrated that such events can provide information on the precise timing of a
quake. It seems that, in Haicheng, the scientists correctly surmised that a quake was on its way,
but were just lucky in terms of telling the civil authorities when to evacuate. Certainly the
method of prediction was not transferable to other earthquake-prone areas, and the Chinese failed utterly to warn of the devastating Tangshan quake (1976) that occurred only a year later. Not to be outdone by their Chinese comrades, Soviet scientists made similar earthquake
predictions for the Kuril Islands between Japan and the Kamchatka Peninsula, only to have to
wait 8 years before the Earth shook!
Prediction and warning - In countries of the West, little attention was focused on trying to accurately predict earthquakes until the 1970s, when increasing worry over the imminence of the next "big one" in California led the USGS to devote more time and money to monitoring the San Andreas Fault System. Much of the work concentrated on using advanced instrumentation to
measure the accumulation of strain along the various parts of the fault and to look for anomalous
movements that might warn that an earthquake was on its way. Like volcanologists, seismologists
have at their disposal an impressive array of instruments to study faults and their activity, and
in fact, many are the same as those utilized to measure volcano deformation. Strain-meters are
used to monitor stress changes in the rock around the fault, while small displacements across
the fault can be detected using a number of different methods. These include laser and
infrared electronic distance-meters, which bounce beams off reflectors located on the far side
of the fault and can detect displacements as small as a millimetre, and creep-meters, which
provide a continuous record of the tiny movements along the fault. Just as tilt-meters and
levelling can be used to monitor swelling of a volcano, so they can be used to look for the ground-surface deformation that may warn of a forthcoming earthquake.
Ground Uplift and Tilting - Measurements taken in the vicinity of active faults sometimes show
that prior to an earthquake the ground is uplifted or tilts due to the swelling of rocks caused by
strain building on the fault. This may lead to the formation of numerous small cracks (called
microcracks). This cracking in the rocks may lead to small earthquakes called foreshocks.
Foreshocks - Many seismologists look to their own specialist instruments, seismographs, to warn of a future quake, believing that a major earthquake is often preceded by smaller quakes, known as foreshocks. On a broad scale, a number of moderate quakes in a particular region may
precede a major one, probably related to the stress-transfer phenomenon, and this appears to
have been the case prior to the great San Francisco quake of 1906 that razed the city to the
ground. Prior to the 1975 earthquake in China mentioned above, the observation of numerous foreshocks led to the successful prediction of an earthquake and the evacuation of the city of Haicheng. The magnitude 7.3 earthquake that occurred destroyed half of the city of about one million inhabitants, but resulted in only a few hundred deaths because of the successful evacuation. On
a smaller scale, tiny quakes, sometimes known as tremors, may occur in the vicinity of a fault a
few days to a few hours before it snaps. Unfortunately, they are not always seen before a big
quake, so they cannot be relied upon. Furthermore, they may also occur in the absence of a
larger, following earthquake, making them even less reliable predictors. There were 4 foreshocks
before the 1995 Kobe earthquake but these were only recognised after the earthquake!
Water Level in Wells - As rocks become strained in the vicinity of a fault, changes in pressure
of the groundwater (water existing in the pore spaces and fractures in rocks) occur. This may
force the groundwater to move to higher or lower elevations, causing changes in the water levels
in wells.
Radon Gas - Some days before a major earthquake, the enormous strains that have
accumulated along a fault can lead to cracking of the rock deep down. Because water quickly
enters these cracks, the level of the water table may change prior to a quake, and this can be
measured by monitoring the water levels in specially dug wells that penetrate into the top of the
water table. The same cracks may also provide a passage to the surface for the naturally
occurring radioactive gas radon, which is common in ground water. Special radon detectors,
located in wells, may therefore be able to warn of a forthcoming earthquake by spotting an
increase in the amount of radon gas coming from the well water. Detailed studies of water-table
changes in China and elsewhere have revealed that, generally speaking, the larger the area
affected and the longer the period before an earthquake that the changes are observed, the
bigger the quake will be. Radon gas levels changed by over 10% before the 1995 Kobe
earthquake.
Electrical Currents - In the last couple of decades there has been much debate about whether
or not the stress variations that occur around a fault before a large earthquake can cause
changes in the electrical or magnetic properties of the crust in its vicinity. Natural electrical
currents, known as Earth currents are constantly flowing through the crust, and some scientists
claim to have detected small changes in the strength of these currents prior to a large
earthquake. An increase in electromagnetism was recorded 11 days before the 1995 Kobe
earthquake.
Electrical Resistivity - Similarly, the natural resistance of the crust to electrical currents has
also been observed to change before a major quake. This property, known as the electrical
resistance of the crust, can be monitored using an electrical resistivity meter, which shoots
pulses of electrical energy into the crust and measures the resistance in the rock to their
passage. Scientists working in various earthquake-prone parts of the world have reported
detecting noticeable falls in electrical resistivity around faults prior to large quakes. This drop in electrical resistivity is due to an increase in groundwater (in itself due to an increase in cracks in the crust allowing groundwater to rise up through it), which easily conducts electrical currents. So the more water in the rocks, the more easily the pulses of electrical energy shot by the resistivity meter can travel; the less water, the harder their passage, giving the rocks more resistance to electrical currents. Once again, however, the changes remain too unreliable to be used as dependable predictors.
Animal Behaviour - Enough, then, of high-tech methods of predicting earthquakes; what about
getting back to basics? Is there a cheaper and simpler way of finding out if a major quake is on
its way? It seems that in unusual animal behaviour there might be. The problem is that
monitoring animal behaviour to predict an earthquake is extremely subjective and depends as
much on what the observer defines as 'unusual' as it does on the behaviour of the animals
themselves. There are, nevertheless, numerous accounts from all over the world of anomalous
animal behaviour prior to significant earthquakes. Japanese fishermen reported bigger catches
just before the 1995 Kobe earthquake, the idea presumably being that, given a choice, the
fishermen's nets seem preferable to the impact of an undersea quake. Japanese catfish are also
said to become more active prior to an earthquake, leaping out of the water in an excitable state.
Before the Haicheng quake all sorts of animals are reported to have gone wild, with pigs
becoming unusually aggressive and birds flying into trees. Prior to a magnitude 7.4 earthquake in
Tianjin, China, zookeepers reported unusual animal behaviour: snakes refusing to go into their
holes, swans refusing to go near water, pandas screaming, and so on. This was the first systematic
study of such phenomena prior to an earthquake.
Notwithstanding this, however, the body of evidence does support the notion that some
organisms can detect phenomena that warn of an imminent quake. This is a critical discovery
because, if animals are able to do this, then we should be able to build instruments to do the
same thing. Although much conjecture is involved, some scientists have suggested that some
animals may be able to detect ultrasound or electro-magnetic radiation emitted from the crust
just before a fault moves. Others have suggested that electrostatic charges may be generated
that give furry and feathery beasts repeated electric shocks - certainly enough to irritate an
already aggressive pig! The fact remains, however, that we simply do not know what causes
animals to react so strangely before a large earthquake. Perhaps, once we do, the Earth will
become a much safer place.
The dangers of predictions
It is sometimes worth pondering what would happen if we could accurately predict a major
earthquake many months, or even years, ahead. Imagine that, in 50 years' time, seismologists
have developed a technique that allows them to pinpoint the timing and size of an earthquake two
to three years ahead with an accuracy of 2-3 weeks, and that, in early 2048, the USGS issues a warning
that a magnitude 8.1 quake will strike Los Angeles in November 2050. The immediate effect
would be a plummet in property prices and the collapse of the real-estate business, as buyers for
property in the region evaporated. Insurance companies would pull out just as quickly, leaving
those with unwanted properties on their hands without cover. Major companies would make plans
to move elsewhere, offloading their workforces and creating a huge unemployment problem. It is
not hard to believe that, even before the earthquake strikes, its prediction might have caused
almost as much, if not more damage to the economy of the region. Perhaps then it would be
better if the Holy Grail of successful earthquake prediction - like the real thing - remained
always just out of reach.
Community preparedness - Experience of how people behave in earthquakes has helped to devise
recommendations on the most appropriate action. There are numerous ways of preparing the
community for an earthquake event:
General public awareness is very important. The key response by the public is to have
emergency supplies in stock, move under protective furniture during the earthquake and then
await rescue. School children in California are taught "Duck & Cover" drills from a very early age.
First-aid training is invaluable, as there is likely to be a delay of hours or even days before
outside help arrives. In Japan all school children are put through emergency earthquake and fire
drills 4 times a year and told what to do during an earthquake. Earthquake kits, sold in stores,
contain buckets, food, water, a first-aid kit, a torch and headgear.
Disaster Prevention Day on Sept 1st each year in Japan when throughout the country local
communities and businesses hold drills. People's reaction during the Kobe earthquake showed
there were flaws in the preparedness system. Many people were seen running outside buildings
and were hit by falling debris or wandering aimlessly through the streets. Kobe's citizens
believed that there was little earthquake risk in the area, although large businesses had avoided
siting expensive plants in the area.
Tokyo Gas Company has a seismic network which transmits information about pipeline damage so
that gas can be switched off. Individual houses in Tokyo also have "smart meters" which cut off
the gas if an earthquake over magnitude 5 occurs. Unfortunately these developments are not yet
widespread. In 1990 the Japanese government passed a resolution to
transfer some of Tokyo's political and administrative functions to the north of Honshu.
Emergency service planning and organisation to develop strategies on where to deploy people and
equipment in an area which may have suffered communication disruption. Computer developments
help make emergency responses more effective.
Government preparedness and responsibility for decision-making. In Kobe it was the response of
the local and national government which caused the most concern. The official response was slow:
there was a 5-hour delay before calling in the army, and then only 200 troops were mobilised; it
took 4 days for 30,000 troops to be deployed to help the rescue; government officials debated for
several days before deciding to designate the area a "disaster zone" so that it could receive
emergency relief; and 3 days after the quake the city still had no electricity. Devastation cut off
most transport routes out of the city and caused huge traffic jams on the remaining roads as
people tried to escape, exposing serious flaws in the Japanese management structure. What seems
to be needed in Japan now is a more detailed and open assessment of the risks to urban areas
and of the ability of the emergency services to react.
Land-use planning - Hazard maps produced using GIS enable the most hazardous areas to be
identified (targeted for immediate emergency action) and regulated (identifying areas where
aseismic building designs should be used, or where development should be stopped altogether).
Planning at the local level keeps public open space and major city services apart. Planning at the
national level can transfer political and administrative functions to other areas which are less
seismically active.
iii. Modify the Loss
Aid - Disaster aid helps distribute financial losses on an international scale. However, aid does
not continue over the long term to help rebuild lives and property; most international aid has
been given in the few days after the event to provide medical services and other relief goods. In
Japan there were delays in accepting international help, such as from the US military based in
Japan or foreign medical teams and sniffer dogs, in the belief that the country could cope by
itself.
Insurance - Insurance is only really available in MEDCs. However, even in Japan the vast
majority of people at risk from earthquakes have no realistic access to insurance due to cost and
their own risk perception. In Kobe it was mainly commercial and industrial property which was
insured.
Tsunamis
Causes & Characteristics:
A tsunami (soo-NAH-mee) is a series of waves of
extremely long wavelength generated in a body of
water by an impulsive disturbance that displaces
the water due to a volume change in the ocean
basin. Tsunamis are primarily associated with
earthquakes in oceanic and coastal regions.
Landslides, volcanic eruptions, nuclear explosions,
and even impacts of objects from outer space
(such as meteorites, asteroids, and comets) can
also generate tsunamis. Earthquakes generate
tsunamis when the sea floor abruptly deforms and
displaces the overlying water from its equilibrium
position. Waves are formed as the displaced water mass, which acts under the influence of gravity,
attempts to regain its equilibrium. The main factor which determines the initial size of a tsunami is the
amount of vertical sea floor deformation. As the tsunami crosses the deep ocean, its length from crest
to crest may be a hundred miles or more, and its height from crest to trough will only be a few feet or
less. They cannot be felt aboard ships in the open ocean. In the deepest oceans, the waves will reach
speeds exceeding 600 miles per hour (970 km/hr). When the tsunami enters the shallow water of
coastlines, the velocity of its waves diminishes and the wave height increases. It is in these shallow
waters that a large tsunami can crest to heights exceeding 100 feet (30 m) and strike with devastating
force.
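The behaviour described above follows from the shallow-water wave approximation, in which a tsunami's speed depends only on water depth (v = √(g·d)). A minimal sketch, using standard gravity and illustrative depths (the function name and depth values are assumptions for demonstration):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m):
    """Shallow-water wave speed v = sqrt(g * d), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6

# Illustrative depths: a deep ocean basin vs. a shallow coastal shelf.
print(round(tsunami_speed_kmh(5000)))  # deep ocean: roughly 800 km/h
print(round(tsunami_speed_kmh(10)))    # shallow coast: a few tens of km/h
```

As the wave slows in shallow water, conservation of energy forces the wave height up, which is why the crest can grow from less than a metre at sea to tens of metres at the coast.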
Boxing Day Indonesian Tsunami, December 2004.
Impacts:
The tsunami was triggered by a massive megathrust earthquake, generated by the stress built up
as the Indo-Australian plate subducts beneath the Eurasian plate in the Indian Ocean near the
west coast of the island of Sumatra. The enormous magnitude of the earthquake (9.2) and the speed and
height of the wave generated meant that the number of countries that experienced damage and
casualties was astonishing. Fatalities were reported on both sides of the Indian Ocean: in Indonesia,
Sri Lanka, India, Thailand, the Maldives, Somalia, Myanmar, Malaysia, the Seychelles, Kenya and South
Africa. There was no warning system in place in the heavily populated and often remote coastal
regions around the Indian Ocean, and the populations were unaware of the approaching killer wave.
Impacts on Sri Lanka:
Sri Lanka is an interesting case study when investigating the impacts of a tsunami. The Boxing Day
wave was earthquake-triggered, and most areas near the epicentre (Indonesia) suffered damage from
both the quake and the wave; Sri Lanka, however, provides an example of a country where we can
exclusively study the effects of the tsunami (as there was no impact from the earthquake itself).
Sri Lanka is a small island nation, 440 km in length and 220 km in width, with a land area of
65,610 km², smaller than Tasmania. Although 1,600 km from the epicentre, Sri Lanka suffered
tremendous force from the wave as it swept 5 km inland. The wave essentially hit a long
(1,000 km) but thin section of the coastline.
It took the wave 2 hours to travel to the island from the epicentre and, even though much of the
developed world was aware of the disaster unfolding in Indonesia, no warning was given to Sri Lanka
(although earthquake waves were recorded in Chennai). This was mainly because it is such a
geographically isolated country, with the majority of the population living in inaccessible coastal
regions.
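The two-hour travel time is consistent with the deep-ocean speeds given earlier; a back-of-the-envelope check, assuming an average speed of around 800 km/h over the roughly 1,600 km path:

```python
# Back-of-the-envelope check on the tsunami's travel time to Sri Lanka.
# The average speed is an assumption based on typical deep-ocean values.
distance_km = 1600     # approximate epicentre-to-Sri-Lanka distance
avg_speed_kmh = 800    # assumed average deep-ocean tsunami speed
travel_time_h = distance_km / avg_speed_kmh
print(travel_time_h)   # 2.0 hours, matching the reported arrival time
```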
The wave hit at 10:20 local time. The wavelength of the tsunami was around 500 km, which exceeded
the width of the obstacle it met (Sri Lanka is less than 400 km across perpendicular to the wave),
meaning the wave could wrap around the majority of the island by diffraction. The wave approached
from the SE, so you would expect the damage to be less in the lee of the wave; however, because of
the size of the island, the wave wrapped almost all the way around it (only about 1/8 of the
coastline, from Chilaw to Pooneryn on the NW coast, remained unaffected at first), leaving only a
small shadow zone on the lee side: after passing the island, the waves turned in towards this
sheltered region. It is likely that the earthquake ruptured along a 1,600 km length of fault, which
caused more than one point of slippage and therefore numerous earthquake foci. As a result, waves
were able to interact by either constructive or destructive interference, causing locally greater
or smaller amounts of energy respectively.
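The diffraction condition described above reduces to a simple comparison; a one-line check using the figures from the text:

```python
# Rough diffraction check: when a tsunami's wavelength exceeds the width of
# the obstacle in its path, the wave can wrap around it. Values from the text.
wavelength_km = 500
island_width_km = 400  # Sri Lanka's extent perpendicular to the wave (<400 km)

wraps_around = wavelength_km > island_width_km
print(wraps_around)  # True -> the wave diffracts around the island
```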
50,000 killed in Sri Lanka (out of a population of 20 million); 300,000 in the total region
2.5 million people displaced
140,000 houses and buildings destroyed in coastal towns
24,000 boats destroyed (70% of the fishing fleet)
11,000 businesses destroyed
Coastal infrastructure (roads, railways, power, telecommunications, water supply, fishing ports)
was also significantly affected. Fortunately, the port of Colombo and the industrial belt in the
western province sustained only light damage.
1,700 passengers on a train washed away
The ocean churned and receded
$1.5 billion of damage, mainly to housing, fisheries, tourism and transportation
Despite the scale of the tragedy, none of the key economic infrastructure was damaged by the
tsunami, and rebuilding the destroyed harbours and infrastructure presented a boom to the
construction industry. The impact on the national economy was therefore less than 0.5% of GDP!
The tide of aid that washed onto Sri Lanka's shores has had a positive effect on the local economy.
Responses
As Sri Lanka is an LDC, the local government was ill-prepared to manage a disaster of this scale.
It takes a strong government and a highly organised country to deliver appropriate assistance to
its population in the aftermath of such an event. Therefore, in Sri Lanka the majority of the
response came in the form of aid, mainly from developed countries in Europe and North America.
Neighbouring countries also offered assistance in terms of humanitarian aid and food/medicines.
Although the organisation of the distribution of the aid by the aid agencies has been heavily
criticised for not taking into account its impact on the local economy and the culture of local
people, it is also highlighted that without intervention by the international community Sri Lanka
would have been overwhelmed by the scale of the disaster, especially as it was just emerging from
a twenty-year civil war between the Tamil Tigers and the Sri Lankan government.
A selection of responses:
Aid agencies operating included Oxfam, Save the Children and Care. By the end of the first month
there were 300 new international non-government organisations (INGOs) operating in Sri Lanka
(only about half of these had registered their presence).
Aid agencies distributed goods, services and funds in the affected area without any comprehension
of their impact on local dynamics, of how they related to actual needs, or of whether they were
duplicating the work of other aid agencies.
Money became available, so many aid agencies descended on the area and competed for the funds,
causing rapid 'congestion'.
Every Thursday during the first few weeks following the tsunami, the director of the CNO
conducted a briefing session to update on progress and key developments in the emergency relief.
50,000 metric tonnes of rice were offered by the government of Taiwan and accepted even though
there was no food shortage within the tsunami-affected population. Farmers now cannot sell rice
as people can get it for free, so they cannot pay off loans and will not accumulate enough
capital for the next season.
The international public had been over-sensitised to forms of misery and suffering such as this
by their exposure to the Band Aid appeal for the Ethiopian famine; this prevented them from
donating as much as they would have before the 1980s appeal.
Many local aid agencies who once only managed small budgets now found themselves managing four
times the amount and often did not know how to utilise such funds.
The construction industry was stimulated by its rapid expansion and the demand for
reconstruction, e.g. the price of a carpenter rose from 500 rupees (£2.50) to 1,500 rupees (£7.50).
Aid can be a business in itself, and many companies competed in 'bidding wars' to provide the aid,
as there was international monetary value attached to it.
Some INGOs hired organised crime gangs to terrorise citizens into moving into their housing when
their aid was refused, but eventually these coercive and divisive tactics were rejected and the
community once again embraced the support of their original aid donor.
Some people were refused government compensation to rebuild their houses as they lived outside
the 'coastal re-construction exclusion zone'.
Although the aid effort was not coordinated, in extreme circumstances such as a tsunami it takes
a strong government to manage the relief efforts, and the help from non-government agencies made
the response better than if the government had dealt with it all on its own.
The economy has been stimulated by the inflow of foreign currency and international debt relief.
Volcanoes
1. What Causes Volcanoes?
There is a considerable link between plate boundaries and volcanic activity. According to plate tectonic
theory huge slabs of rigid lithosphere (plates) 100-300 km thick are in constant movement, driven by
convection currents originating deep within the Earth. Volcanic activity occurs wherever plates are being
pulled apart or pushed together, both circumstances cause magma to be generated.
Major Forms of Volcanism:
1. Constructive Plate Margins
Most of the magma which reaches the earth's surface (73%) occurs along these boundaries. At
constructive (divergent) plate margins the lithosphere is being pulled apart due to diverging convection
cells in the underlying mantle, a process known as sea-floor spreading. As the lithosphere is pulled
apart it thins and forms rift valleys. The mobile asthenosphere rises up to fill the space left behind by
the thinning lithosphere and consequently is under less pressure (as it is now nearer the surface). This
drop in pressure is enough to allow some melting of the asthenosphere, a process known as
decompression melting. When mantle material in the asthenosphere (peridotite <45% SiO2) begins to
melt, the liquid "sweated" out from it has a slightly higher silica content as the asthenosphere is only
partially melted. Magma produced in this way reaches the surface through weaknesses (vents) and thin
gaps (fissures) and solidifies to form basaltic oceanic crust (45-52% SiO2), the composition of which
is, on average, about 50% silica. The melting of a large volume of rock to yield a smaller volume of melt
enriched in silica is known as partial melting. The whole of the oceanic crust has been produced by
partial melting of the asthenosphere.
1. This is where the convection currents are pulling the oceanic lithosphere apart.
2. The lithosphere thins as it stretches.
3. Mobile asthenosphere material rises to fill the space left behind by the thinning lithosphere.
As it is nearer to the surface it is under less pressure and consequently starts to melt, a
process known as decompression melting. As it is asthenosphere material that is being partially
melted, the magma formed is basaltic in composition. The newly generated basaltic magma reaches
the surface via weaknesses in the lithosphere, such as fissures.
NB. This type of margin produces fissure eruptions, Shield Volcanoes and Fire Fountains. Please see
tectonic notes section on volcano types.
2. Destructive Plate Margins
Some 80% of the world's active volcanoes occur along destructive boundaries. At destructive
(convergent) plate margins lithospheric plates collide. As the plates collide, part of one plate may be
driven beneath another, a process known as subduction. The descending plate drags down with it crustal
material and forms an ocean trench. Because oceanic crust is denser (3.0g/cm³) than continental crust
(2.7g/cm³), when two plates meet at a subduction zone it is almost invariably the edge of the plate
carrying the oceanic crust that goes under.
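The density contrast that decides which plate sinks can be written as a one-line comparison; the figures are those quoted above:

```python
# Densities quoted in the text (g/cm^3): at a subduction zone it is the
# denser plate that goes under.
oceanic_density = 3.0
continental_density = 2.7

subducting = "oceanic" if oceanic_density > continental_density else "continental"
print(subducting)  # oceanic
```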
121
As the oceanic plate descends into the asthenosphere, the first change to occur is that seawater that
had been trapped within the wet rocks and sediments of the descending plate is ―driven off‖ and the
volatiles are liberated. This begins at a depth of about 50 km and is virtually complete by the time the
descending plate has reached about 200 km in depth. The escaping water passes upward into the
asthenosphere above the descending plate, where it helps in the generation of magma. This is because
although dry mantle at this depth and temperature would be completely solid, adding water induces
partial melting. This process is known as hydration melting, and is a second important way in which
magma can be formed within the Earth without the need for the rocks to be heated.
The next thing to happen to a subducting plate is that its crustal part starts to partially melt (due to
water lowering its melting point) as it subducts to greater and greater depths. This partially melted
oceanic crust provides another source of magma. The magma from these sources rises upwards, due to
it being hot and less dense than the surrounding rocks, and concentrates in a belt about 70 km above the
descending plate.
The composition of the magma produced at destructive plate margins varies according to whether it is
generated in the crust or in the mantle (asthenosphere):
Partial melting of the asthenosphere (peridotite composition <45% SiO2) situated above the
descending oceanic plate will give a magma of basaltic composition.
Partial melting of the descending oceanic crust (basaltic composition 45-52% SiO2) will tend to
produce magma richer in silica referred to as andesitic in composition (52-66% SiO2).
Partial melting of the base of the continental crust (andesitic composition 52-66% SiO2) may
yield magmas even richer in silica referred to as rhyolitic in composition (>66% SiO2).
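The silica bands above can be captured in a small classifier. The boundary values are taken directly from the text; the function and category labels are illustrative:

```python
# Classify a magma by its silica (SiO2) content, using the percentage bands
# given in the text. Function and label names are illustrative only.

def classify_magma(sio2_percent):
    if sio2_percent < 45:
        return "ultrabasic (peridotite-like)"
    elif sio2_percent <= 52:
        return "basaltic"
    elif sio2_percent <= 66:
        return "andesitic"
    else:
        return "rhyolitic"

print(classify_magma(50))  # basaltic
print(classify_magma(60))  # andesitic
print(classify_magma(70))  # rhyolitic
```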
The composition of magma reaching the surface at volcanoes is further complicated by how much mixing
there is between magma from different sources, and also by changes that occur during its ascent. The
latter include the settling out of crystals and contamination by absorption of lumps of crust.
Unsurprisingly, a very wide variety of volcanoes and volcanic lavas can be found above subduction zones!
There are essentially two settings in which destructive plate boundaries cause volcanoes:
NB. They form a range of volcanoes and volcanic products. The classic volcanoes formed here would be
Grey volcanoes of various types e.g. Stratocone, Composite Cone and Caldera types (see later volcanoes
case studies for full examples of types)
O-O Collision:
The first is where one oceanic plate descends below another oceanic plate. When this happens,
the rise of magma through the overriding oceanic plate leads to the construction of a series of
volcanoes. This is the origin of the volcanic island arcs in the northern and western Pacific
Ocean and in the Caribbean. Usually, when determining which plate will sink, it is the oldest and
coldest oceanic plate that is subducted, as it is less thermally expanded and therefore has less
buoyancy.
1. This is where the subducting basaltic oceanic crust is partially
melted to form magma of an andesitic composition.
2. This is where water released from the descending oceanic crust
escapes into the overlying asthenosphere causing the peridotite
rocks in this area to partially melt, forming basaltic magmas.
O-C Collision:
The other is when subduction of oceanic lithosphere occurs beneath a continent, as where the
Nazca plate (oceanic crust) subducts beneath South America (continental crust). Here the
volcanoes grow on pre-existing continental crust, giving rise to an Andean-type volcanic
mountain range.
1. This is where the subducting basaltic oceanic crust is partially melted to form magma of an
andesitic composition.
2. This is where water released from the descending oceanic crust escapes into the overlying
asthenosphere, causing the peridotite rocks in this area to partially melt, forming basaltic
magmas.
3. This is where andesitic rocks in the continental crust partially melt to form
magmas of a rhyolitic composition.
Italian Volcanoes
Italian volcanoes all owe their origin to the collision between the African and Eurasian plates. This
is a complex collision zone which has formed as a result of the closure of the Tethys Ocean.
Scientists have only agreed on the model over the last 10-15 years. However the distribution of the
Italian volcanoes is surprisingly simple as they form a linear chain on the Italian west coast trending
NW – SE. The Italian volcanoes can be split into three distinct regions: The Roman Province (to the
North); The Campanian Province (in the middle) – Including Campi Flegrei and Somma – Vesuvius; The
Aeolian Islands and Etna (to the South West).
Most of these three volcanic regions are caused by subduction, but not in a simple way.
The collision of the African plate moving north-west onto the Eurasian plate has led to an
anticlockwise rotation of this region behind the collision zone. This led to twisting which
stretched and thinned the continental crust to breaking point about 7-8 million years ago. This
thinning reduced the pressure on the underlying mantle, and the asthenosphere rose to fill the
gap, causing decompression melting and allowing basalt to escape onto the basement rocks of the
thinning and subsiding Tyrrhenian Basin. The crustal rocks in this region broke in three distinct
orientations - north, east and southwest - about 100 km from the present Campanian coastline
(near the Bay of Naples). This type of tectonics is known as a triple junction. The north of
Italy has continued to move anticlockwise, and this movement has been responsible for the Roman
volcanic province to the north (as well as causing the 6.3 magnitude L'Aquila earthquake of April
2009). The southern region that includes Etna is the result of the continued subduction of the
African plate beneath the Eurasian plate in a north-westerly motion, which has resulted in
hydration melting and a consequently high silica concentration in volcanic products.
C-C Collision:
[Diagram: continental collision between the Indian Plate and the Eurasian Plate]
However, it is only oceanic crust that can be destroyed at subduction zones. When both plates
contain continental lithosphere, neither can be subducted, due to their thickness and buoyancy
(both 2.7 g/cm³). The edges of both continents become buckled at first, and then one will
eventually get thrust over the other. As subduction has stopped in this situation, there is no
source of magma capable of rising to the surface, so collision zones between continents are not
generally characterised by volcanoes, e.g. the Himalayas and the Tibetan Plateau; however, they
may contain magmas at depth!
1. This is where one of the continental plates is being thrust over the other. There is no partial melting and therefore no generation of
magma to form volcanoes.
3. Conservative Plate Margins
Volcanoes do not commonly occur at conservative plate margins. This is because, although the faults
between plates can provide convenient pathways for magma to reach the surface, there is little or no
magma available as there is no subduction or thinning of the crust to allow magma generation.
4. Intra-Plate Areas
Volcanoes occur away from plate boundaries at two distinct localities:
Mid Ocean Hot Spots (e.g. Hawaii)
Hot spots are small areas of the lithosphere with unusually high heat flow. Slowly rising mantle rocks
(known as a mantle plume) partially melt the asthenosphere in these regions creating volcanic activity at
the surface. The movement of the lithospheric plates over the hot spot produces a chain of what are
mainly now extinct volcanoes. The most famous example of an active hot spot is the Hawaiian Islands,
but there are thought to have been 125 hot spots active in the last 10 million years. Because the heat
from these rising mantle plumes partially melts asthenosphere material (peridotite <45% SiO2) the
magma generated is basaltic in composition.
Continental Rift Valleys (e.g. East Africa)
Where two continental plates are starting
to be pulled apart the overlying lithosphere
begins to thin allowing the mobile
asthenosphere to rise into the space. The
asthenosphere is nearer the surface and
consequently under less pressure, allowing it to partially melt - a process known as
decompression melting. The East African Rift Valley system has 14 active volcanoes, with a wide
range of magma types depending upon whether it is the asthenosphere or the overlying continental
crust that is partially melting. The most famous of these active volcanoes is the highest
mountain in Africa, Kilimanjaro (5,895 m), which rises up in Tanzania, casting its magnificence
over the fertile plains of Kenya to the north. Given sufficient time, the African continent might
eventually split apart to form a new emerging ocean and a new plate boundary. Because East Africa
is one of the most geologically young areas on the continent, it is in the rift valley that great
species diversity occurs, as the soils are fertile and rich (from volcanic eruptions) and there
are many ecological niches for species to fill.
Kilimanjaro - Tanzania
Formation of Rift Valleys and associated volcanism
Minor Forms of Volcanic (extrusive) Activity:
Geysers - a geyser is a volcanic spring characterised by the intermittent discharge of water
ejected turbulently. Groundwater percolates into the basement rocks, up to 2,000 m deep, and when
it reaches hot, volcanically heated rocks it is vaporised and ejected through the vent to the
surface. Many examples occur in Iceland, e.g. Strokkur, and probably the most famous is Old
Faithful in Yellowstone National Park (USA).
Springs – Many volcanic areas have warm geothermally heated springs such as those associated with
volcanism in Iceland and Italy. But springs occur globally in volcanic regions and can even heat seas.
The Romans built spa towns around clusters of these springs. A good example is the one on the
island of Vulcano, which you have the opportunity to swim in this year, where a disused quarry
pit and a section of the coastline around the harbour are geothermally heated.
Boiling Mud – in certain situations geothermal energy from deep within the Earth heats mud pools in
depressions at the Earth‘s surface. Solfatara in Italy is a good example of a Caldera volcano that
contains this kind of boiling mud activity, although it is not as fierce as in the geological past at this
locality.
2. Geographical Distribution of Volcanoes
The majority of eruptions occur around the Pacific Ocean, along what is termed the Pacific "ring
of fire". Other major belts of volcanic activity include the Mid-Atlantic Ridge (including
Iceland), the Indonesian islands, the Mediterranean (especially Italy), the Caribbean, the East
African rift valley and the central Pacific Ocean, e.g. the Hawaiian Islands.
The map above shows the global distribution of volcanoes. It is possible to identify the following
features from the map.
Most volcanoes do coincide with the major plate margins.
A number of volcanoes occur away from plate margins - these are often referred to as intraplate
or mid-plate volcanoes, such as the Hawaiian Islands and the East African rift valley.
Most of the volcanoes shown occur in linear (straight) or arcuate (curved) belts, which by and
large coincide with destructive plate boundaries. Linear belts include the volcanoes of the Andes
and the Hawaiian Islands, and arcuate belts include the Aleutian Islands of Alaska and the
Indonesian volcanic island arc.
Volcanoes form in narrow belts.
3. Scale and Frequency of Volcanic Eruptions
In the same way that seismologists use the Richter Scale to provide a magnitude of an earthquake,
volcanologists use the Volcano Explosivity Index (VEI) to describe the scale of a volcanic eruption.
Like the Richter Scale, the VEI is open-ended (although no evidence exists for an eruption larger than
VEI 8) and each value represents an eruption 10 times larger than the previous value (logarithmic).
VEI 0 - Applies to non-explosive eruptions. These involve the gentle effusion of low-viscosity
basaltic magma, e.g. Hawaiian and Icelandic volcanoes.
VEI 1-2 - Small explosive eruptions that eject enough debris (100,000 m³) to cover London with a
thin dusting of ash.
VEI 5 - Represents a more violent explosive eruption of the more viscous andesitic magmas, such
as the 1980 Mount St Helens eruption in the USA.
VEI 6 - Represents eruptions like Mt. Pinatubo in the Philippines in 1991. This was the largest
eruption of the 20th century and blasted out sufficient material to bury the City of London to a
depth twice the height of the former World Trade Centre towers in New York.
VEI 7 - The only historic eruption known to merit a 7 is the 1815 eruption of Tambora in
Indonesia, the largest eruption since the end of the last Ice Age, 10,000 years ago. The Tambora
eruption blasted out over 100 km³ of debris - enough to bury the City of London to a depth of 1 km!
VEI 8 - The most recent eruption to register 8 on the Volcano Explosivity Index was the gigantic
blast that occurred at Toba, in Sumatra, Indonesia, over 70,000 years ago. This blast ejected
sufficient gas and debris into the atmosphere to bury the whole of Greater London to a depth of
1 km, as well as causing a rapid and dramatic change to the global climate.
The Volcanic Explosivity Index (VEI) is useful only for classifying explosive eruptions. There are two
other scales commonly used by volcanologists to record and compare the sizes of eruptions.
1. Eruption Magnitude
Eruption magnitude is determined by the total mass of material erupted during an eruption.
0    < 10,000 tonnes of material erupted
1    100,000 tonnes of material erupted
2    1,000,000 tonnes of material erupted
3    10,000,000 tonnes of material erupted
4    100,000,000 tonnes of material erupted
5    1 billion tonnes of material erupted
6    10 billion tonnes of material erupted
7    100 billion tonnes of material erupted
8    1,000 billion tonnes of material erupted
2. Eruption Intensity
Eruption intensity is defined by the rate at which material is erupted, measured in kg per second.
3    eruption rate of 1 kg/second
4    eruption rate of 10 kg/second
5    eruption rate of 100 kg/second
6    eruption rate of 1,000 kg/second
7    eruption rate of 10,000 kg/second
8    eruption rate of 100,000 kg/second
9    eruption rate of 1 million kg/second
10   eruption rate of 10 million kg/second
11   eruption rate of 100 million kg/second
12   eruption rate of 1 billion kg/second
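Both scales above are logarithmic: each step is a tenfold increase. The guide does not state the formulas explicitly, but the tables are consistent with magnitude = log10(erupted mass in kg) − 7 and intensity = log10(eruption rate in kg/s) + 3. A minimal sketch, with function names of my own invention, can verify this against the tables:

```python
import math

def eruption_magnitude(mass_kg):
    """Magnitude = log10(erupted mass in kg) - 7, consistent with the table above."""
    return math.log10(mass_kg) - 7

def eruption_intensity(rate_kg_per_s):
    """Intensity = log10(eruption rate in kg/s) + 3, consistent with the table above."""
    return math.log10(rate_kg_per_s) + 3

# 1 billion tonnes = 1e12 kg -> magnitude 5, matching the table
print(eruption_magnitude(1e12))
# 100,000 kg/s -> intensity 8, matching the table
print(eruption_intensity(1e5))
```

Checking each table row against these formulas is a quick way to see that both scales really are "10 times larger per step".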
Frequency of Volcanic Eruptions
Nearly 600 volcanoes have had historically documented eruptions, and about 1,500 are believed to have
erupted at least once during the past 10,000 years. In an average year about 50 volcanoes erupt, and at
any one time there are likely to be about 20 volcanoes currently erupting. Only about 1% of volcanic
eruptions in the last 100 years have caused fatalities.
Although some volcanoes erupt only once, the lifetimes of most volcanoes, from their first eruption to
their last, are typically hundreds of thousands to a few millions of years. There are thus long intervals
of quiet between eruptions. Generally speaking, the longer the interval between eruptions, the larger
the eruption! Larger eruptions also tend to be much rarer than smaller ones. For example, every 1,000
years there are likely to be about 100 VEI 5 eruptions, about 10 VEI 6 eruptions, and only about 1 VEI 7
eruption. The average interval between VEI 8 eruptions somewhere on the globe may be as long as
100,000 years, if you are lucky! (See Supervolcanoes.)
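The recurrence figures quoted above imply a tenfold drop in frequency for each VEI step. As a rough illustration (the dictionary values come from the text; treating the expected count per millennium as an annual probability is a simplifying assumption of mine):

```python
# Eruptions per 1,000 years, as quoted in the text for VEI 5-7.
per_1000_years = {5: 100, 6: 10, 7: 1}

def annual_probability(vei):
    """Rough chance of an eruption of this VEI occurring in any given year."""
    return per_1000_years[vei] / 1000.0

for vei in sorted(per_1000_years):
    print(f"VEI {vei}: about {annual_probability(vei):.3f} per year")

# Extending the pattern, a VEI 8 event at roughly 1 per 100,000 years
# would have an annual probability of only about 0.00001.
```

This simple calculation shows why large eruptions dominate the geological record but are almost absent from human history.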
Supervolcanoes
Hidden deep beneath the Earth's surface lies one of the most destructive and yet least-understood
natural phenomena in the world - supervolcanoes (VEI 8+). Only a handful exist in the world, but when
one erupts it will be unlike any volcano we have ever witnessed. The explosion will be heard around the
world. The sky will darken, black rain will fall, and the Earth will be plunged into the equivalent of a
nuclear winter.
Normal volcanoes are formed by a column of magma - molten rock - rising from deep within the Earth,
erupting on the surface, and hardening in layers down the sides. This forms the familiar cone shaped
mountain we associate with volcanoes. Supervolcanoes, however, begin life when magma rises from the
mantle to create a boiling reservoir in the Earth's crust. This chamber increases to an enormous size,
building up colossal pressure until it finally erupts.
The last supervolcano to erupt was Toba 74,000 years ago in Sumatra. Ten thousand times bigger than
Mt St Helens, it created a global catastrophe dramatically affecting life on Earth. Scientists know that
another one is due - they just don't know when... or where.
It is little known that lying underneath one of America's areas of outstanding natural beauty - Yellowstone
National Park - is one of the largest supervolcanoes in the world. Scientists have revealed that it has
been on a regular eruption cycle of roughly 600,000 years. The last eruption was 640,000 years ago... so the
next is overdue.
And the sleeping giant is breathing: volcanologists have been tracking the movement of magma under the
park and have calculated that in parts of Yellowstone the ground has risen over seventy centimetres this
century. Is this just the harmless movement of lava, flowing from one part of the reservoir to another?
Or does it presage something much more sinister, a pressurised build-up of molten lava?
Scientists have very few answers, but they do know that the impact of a Yellowstone eruption is
terrifying to comprehend. Huge areas of the USA would be destroyed, the US economy would probably
collapse, and thousands might die.
And it would devastate the planet. Climatologists now know that Toba blasted so much ash and sulphur
dioxide into the stratosphere that it blocked out the sun, causing the Earth's temperature to plummet.
Some geneticists now believe that this had a catastrophic effect on human life, possibly reducing the
population on Earth to just a few thousand people. Mankind was pushed to the edge of extinction... and it
could happen again.
4. Effects of Volcanoes
Volcanic hazard effects can be classed as primary or secondary. Primary effects include lava flows,
pyroclastic flows, ash and tephra fall, and volcanic gases. Secondary effects include flooding resulting
from melting ice caps and glaciers, lahars (mudflows), landslides, tsunamis, disease and famine.
(i) Primary volcanic hazards:
Pyroclastic flows: are huge clouds of very hot gases (up to 1000°C) charged with all types of
volcanic fragments, which rush downhill at speeds of 100 km/hour or more. Little in their path survives. Good
examples occurred during the eruptions of Pinatubo (1991), Montserrat (1996) and Krakatoa (1883).
Air fall tephra: includes all solid material, of varying grain size that is erupted and later falls to the
ground. It includes material such as volcanic bombs and fine dust.
Lava flows: vary in mobility according to their composition, but the most mobile basalts can travel at
speeds of up to 50 km/hour and can overwhelm people and buildings within a very short time.
(ii) Secondary volcanic hazards:
Volcanic gases: are released in eruptions, and encompass an astonishing variety of chemical
compositions. Only rarely, as in the case of Lake Nyos, are they the direct cause of volcanic disaster.
Lahars: are volcanic mudflows. A mix of water, from heavy rain, melting ice and snow, or from draining
crater lakes, and soft volcanic ash can have a devastating effect as it moves downhill, at speeds of up to
80 km/hour.
An awful example of the destruction lahars cause can be seen in the November 1985 eruption of
Nevado del Ruiz in Colombia, which led to the loss of 23,000 people in the town of Armero,
who were covered by thick, fast-moving mudflows as they slept. The volcanic eruption
melted a glacial ice cap high up on the volcano, and the subsequent meltwater triggered the mudflows.
Amazingly, Armero has been hit by numerous lahars in the geological and historical past; however, as so
often happens in the Developing World, memories are short and reconstruction is rapid, so people simply
don't learn, or don't get the chance to learn, from history. Many of the vital traces of previous lahar deposits
were missed, as the area was not extensively mapped and land-use planning was not adhered to. To
compound the problem, people in such volcanically active areas are attracted to the volcanic slopes and
valleys, as these contain rich and fertile soils formed from recent lava and mudflows, ideal for growing crops.
Landslides occur when particularly viscous material is injected into the upper structure of a volcano.
This sets up stresses that result in huge fractures in the volcano, and the structure can later collapse,
as occurred in the 1980 Mount St Helens eruption, when an earthquake-triggered landslide
caused the north flank to collapse, unleashing a lateral blast eruption.
Tsunamis are more associated with earthquakes than with volcanic eruptions, although a devastating
tsunami resulted from the 1883 eruption of the volcanic island Krakatoa, which lies between Java and
Sumatra in the Sunda Strait. The tsunami drowned some 30,000 people as it rippled over the sea to Sumatra
and Java. The eruption also triggered pyroclastic flows.
Volcanoes have major impacts both on people and on the physical environment. These effects can be
classified as follows:
Physical Environment
Volcanic blast destroys trees.
Lava flows burn vegetation and kill trees.
Pyroclastic flows scorch every living thing in their path.
Ash and tephra fall blankets the surrounding area, destroying vegetation, and can globally block out
the sun, causing climate change.
Mud flows (lahars) bury vegetation and farmland.
Landslides remove vegetation and reduce slope angles.
Tsunami can cause catastrophic flooding.
Built Environment
Loss of communications (roads, elevated highways, railways, bridges, electricity lines).
Building collapse from ash fall, or buildings set on fire by lava flows and pyroclastic flows.
Loss of tourist facilities such as airports.
Human Environment
Death and injury. You could even argue psychological damage.
Destruction of homes causing homelessness.
Loss of factories/industry causing a loss of livelihood and unemployment.
Loss of crops from ash fall causing a reduction in food supply. Wet ash is heavy and can cause flat 'tin
roofs' to collapse.
Loss of communications hindering rescue/emergency services and rebuilding.
Volcanoes cause real harm and destruction, and are a real source of misery/suffering in many people's lives!
5. Factors controlling the degree of damage: i.e. Why do some places suffer more than others?
The table below lists some notable eruptions that
have killed people during the past 2,000 years, and
indicates the principal causes of death in each case.
Although not comprehensive, it does include all the
eruptions of the past 500 years known to have
killed more than 5,000 people (the main cause of
death is listed first).
The table shows that volcanic damage is not the
same everywhere. For instance, it is not adequate
simply to say that the damage and destruction rise
with the VEI of an eruption. Many small-VEI
eruptions can still be damaging, depending on a
number of factors that affect the degree of
damage, e.g.
type of volcano
type of eruption
type of associated hazards
population density
level of economic development
community preparedness
6. Responses to Volcanoes
People respond to volcanoes and the threats they pose to human life and possessions in a way that is
designed to reduce the risks. This response can occur at a range of levels, from the individual and local
communities to national or international level. The response(s) chosen, if any, will depend upon: the
nature of the hazard; past experience of hazardous events; economic ability to take action;
technological resources; hazard perceptions of the decision-makers; knowledge of the available options;
and the social and political framework. People and organisations may not adopt all the available
strategies, since resources of time and money are needed for this. The relative importance of the
threat from natural hazards, compared with other concerns such as jobs, money and education, for
individuals or governments, will be a major factor.
The range of responses available can be divided into 5 broad groups:
Avoidance (very reactive approach)
Modify the event (Control and protection)
Modify the vulnerability (Preparation and prediction)
Modify the loss (Aid)
Acceptance (very fatalistic approach, but often a common one for populations living in hazard-prone
areas, especially LDCs)
The first approach is not always possible and the last is not always advisable! It is the various ways of
modifying the event, the vulnerability and the loss that are the most often used responses.
Modify Event
Control and protection
Very little can be done to control a volcanic eruption. Lava flows are the only primary hazard which
people have attempted to control with any success. Two methods have been used - water sprays and
explosions. Sea-water sprays were successfully used to cool the lava flows during the 1973 eruption
of Eldfell on Heimaey, Iceland, to protect the harbour of Vestmannaeyjar. Explosives were used
with some success in the 1983 eruption of Etna, when 30% of the slow-moving lava flow was diverted
from its course. Artificial barriers have been proposed to protect Hilo, Hawaii, from future lava
flows. Barriers have also been used to protect against the secondary hazard of lahars, which tend
to follow well-defined routes. In Indonesia, some villages have artificial mounds to enable villagers
to escape to higher ground, although adequate warning is needed if this is to be effective. A tunnel
through the crater wall of Kelut volcano, Java, has also been tried to drain the crater lake and
reduce the risk of lahars forming.
the drainage of crater lakes to remove a source of lahars, e.g. Kelut, Indonesia.
the stirring up of crater lakes to prevent carbon dioxide concentrations reaching critical levels,
e.g. Lake Nyos, Cameroon.
the diversion of lava flows, e.g. Mt. Etna, Italy.
the control of lava speed, e.g. Heimaey, Iceland.
the building of barriers to protect against pyroclastic flows.
Building and structure design can do little to resist lava, pyroclastic flows and lahars, since these
volcanic hazards will destroy any structure in their path. Ash fallout has the largest spatial impact,
and design may help reduce its impact. The weight of ash on roofs, especially if it is wet, can be enough
to cause roof collapse. Roofs need to be strong and designed to shed ash, with steep-sloping sides.
In Hawaii, the ultimate hazard-resistant design is timber houses that allow residents of areas at high
risk from lava flows to move their homes if necessary!
Modify the vulnerability
(a) Preparation and prediction
Community Preparedness
Most volcanic events are preceded by clear warnings of activity from the volcano. If the community at
risk is prepared in advance, many lives can be saved. Evacuation is the most important method of hazard
management used today. Evacuation of the area at risk can save lives, but advance preparation and
management structures to organise the evacuation, temporary housing, food, etc. are needed. The
evacuation may be long term: for example, 5,000 residents of Montserrat were evacuated
three times between December 1995 and August 1996, for periods of 3 months, to avoid pyroclastic
flows and ashfall. By November 1996, the disruption was thought likely to continue for a further 18
months. The scale of evacuations can be huge. In 1995, volcanologists and civil defence officials drew up
an emergency evacuation plan for the 600,000 people at risk from an eruption of Vesuvius. The operation
is large scale and involves removing some people to safety by ship. If the people involved panic, the plan
would be useless. People need to be clear about the risk and how to behave during an event.
Evacuations have been successful in recent years and are the most common hazard-management strategy.
Examples include the 1991 eruption of Mt Pinatubo in the Philippines, where 250,000 people were
evacuated, undoubtedly saving many lives (only 50 people were killed by the primary hazards during
the eruption). The Galunggung eruption of 1982 in Java, Indonesia, involved the evacuation of 75,000
people, and a relatively small fatality total of 68. Compare this with the 1985 eruption of Nevado del
Ruiz, where the Colombian government did not have a policy in place for monitoring volcanoes or for
disaster preparedness. Communications between the scientists monitoring volcanoes and government
officials must be clear, consistent and accurate. The eruption was expected, and scientists monitoring
the activity had produced a hazard map. However, the lack of clear communication and indecision
resulted in disaster (23,000 people lost their lives). The Colombian government had more serious
immediate problems - economic crisis, political instability and narcotics cartels - to deal with.
Knowledge of volcanic processes is incomplete, but there have been great strides in forecasting and
predicting eruptions. Various physical processes can be monitored for changes which can signal an
impending eruption. The record of past eruptions is also used to help determine what and where the
risks are highest. At the present time, only 20% of volcanoes are being monitored. As might be expected,
this is mainly in countries such as Japan and the USA which have the researchers, technology and cash
to undertake these activities. Japan possesses 7 volcanic observatories and 15 observation
stations. Kilauea on Hawaii has been monitored for over 40 years, and the Mt St Helens eruption was
probably the best documented. Volcanoes in Kamchatka were carefully surveyed by the former Soviet
Union for nearly half a century.
Some of the physical parameters that can be monitored around volcanoes include:
1. Seismographic Monitoring.
Magma rising within the Earth's crust will set off a series of earth tremors. When the frequency and
intensity of these seismic disturbances increase markedly, magma is approaching the surface.
Sometimes harmonic tremors occur: these are a narrow band of nearly continuous vibrations dominated
by a single frequency. Such tremors were recorded at Mt St Helens, Mt Pinatubo and Nevado del Ruiz.
However, problems exist with forecasting, as the lead time may vary considerably, from a few days to a
year, and there is no certainty that an eruption will follow.
2. Tiltmeters and ground deformation.
Rising magma will often result in ground deformation, often in the form of a bulge in the profile of the
volcano. Tiltmeters can measure the amount of deformation accurately - to 1mm in 1km. In Japan, Mt
Unzen's summit bulged out by 50m before its eruption on 24 May 1991. The now notorious bulge on the
slopes of Mt St Helens was a feature that could be seen with the naked eye, although by the time the
eruption occurred the bulge itself was no longer increasing.
3. Gas and steam emission monitoring.
Volcanic eruptions appear to be preceded by increased emissions of gas and steam from fumaroles and
the crater. Increases in a range of gases, such as hydrogen fluoride and sulphur dioxide, can be
detected. Increased dissolution of acid volcanic gases will lead to decreased pH values (greater acidity)
in crater lakes and other crater pools (the summit crater lake of Nevado del Ruiz virtually contained
sulphuric acid before its eruption in 1985).
4. Other indicators.
As the volcano is about to erupt, more heat is emitted. Thermal anomalies can be detected in the ground
and in various water bodies associated with the volcano. Furthermore, heating will disturb other
properties of volcanic rock, and thus magnetic, gravitational and electrical anomalies can be detected.
Land use planning can be a very important management tool once an agreed volcanic hazard map is
produced. It is still difficult to predict in the long term the timing and scale of volcanic eruptions. Many
poorer countries do not possess the maps and past records needed to produce accurate hazard
assessments, but where they do exist they can be used to plan land uses which avoid high-risk areas or
would result in a reduced economic loss. These need to be enforced through legislation and education. As
part of the US hazard programme, lava flow hazards on Hawaii have been mapped and can now be used as
the basis for informed land-use planning.
(b) Aid
Aid for volcanic hazards comes in 2 main forms: technical aid for monitoring and forecasting, and
financial/goods aid.
Technical aid is usually supplied by richer countries experienced in volcanic eruptions. This involves the
use of high-cost monitoring equipment and expertise to try to forecast events. Financial and other aid
is used as a strategy during and after the event. This may need to occur over a long period compared
with other natural hazards, since eruptions may continue for months at varying levels of activity. For aid
to be an effective management approach, governments must be willing to ask for and receive help from
other nations. Indonesia has much experience with volcanic eruptions and has developed a high level of
hazard mitigation within its financial resources. This involves monitoring of volcanoes and planning for
how aid will be used.
7. Factors Affecting the Responses to Volcanoes
(i) Management of Mount Etna, Sicily: An MDC example: Successful management of lava flows
Volcanism is commonplace on the island of Sicily and on surrounding islands; therefore this area has
been well studied, awareness of the volcanic hazard among the local community is good, and
community preparation and decision-making by central government is usually well thought out and
comprehensive. People in these areas don't fear or hate the threat that volcanoes bring - quite the
opposite. On Sicily, the population are proud of their volcano, Etna or Mongibello (Mountain of
Mountains): it brings them rich pickings in terms of tourism, and olive, lemon and orange groves as well as
vineyards that produce some of the finest produce in Europe, generating wealth for the community
thanks to the fertile volcanically derived soils.
Etna owes its origin to an underwater volcano that erupted half a million years ago near Aci Castello
(as shown by the pillow lavas found here). The conduit (vent) has repeatedly moved north over
subsequent years.
Nature of the Hazard
At 3,350m high, dwarfing Catania and most of eastern Sicily, Etna threatens its inhabitants with 4 vents
at the summit as well as flank eruptions. These flank eruptions are often from parasitic cones known as
'hornitos'. Etna is a composite-cone strato volcano which erupts low-silica basaltic lavas. The lavas
are low viscosity and tend to be aa (pronounced 'ah-ah', meaning blocky), though some are of the ropey
pahoehoe type; as they are low viscosity they travel large distances.
Etna is a two-headed monster, threatening on the one hand but a tourist attraction on the other, with many
tourists coming to witness the volcanism and to ski on its slopes. About 20% of Sicily's population live
within reach of Etna - the soils are fertile, supporting orange and lemon groves, and offering a good
lifestyle for the people who live here.
Many eruptions on Etna are not from the main crater but take the form of flank eruptions. These
are the most destructive, as they release low-viscosity lava low down the slopes! The famous 1991
eruptions were from the flanks. In its history Etna has displayed a wide variety of eruption styles,
ranging from minor eruptions to ones of a more explosive nature. Other hazards recognised from past
eruptions are: gas emissions; seismic activity associated with rising magma and cracking as the volcano
expands/contracts; ashfalls and dust; and violent phreatic eruptions, which are driven by water entering
the system and being released as steam, water, ash and volcanic blocks/bombs. Flank collapse has also
occurred on Etna.
Eruption History, Responses and Impacts
In 1313 the city of Catania was invaded by lava. The cooled flows form the foundations for the medieval
watchtower that overlooks the coast. In 1669, the town of Nicolosi was engulfed by extensive flows.
In 1928 numerous eruptions occurred and the town of Mascali was destroyed in just two days; people
saved building materials, even tiles and bricks. In 1983 trees burned and years' worth of farmers' work
was destroyed by lavas. On May 14th that year, after 10 days of preparations, the authorities claimed
their first victory in controlling Etna's flows. After 2 months of signs of life, a lateral eruption occurred.
Natural barriers and a dug tunnel were crucial steps that diverted the flows. Again in 1989, the National
Institute of Volcanology in Catania warned that a dangerous fissure was opening on Etna's SE side. The
GNV monitors the volcano 24 hours a day; it was not the high-altitude lava flows that were worrying
officials, but the fissure opening at lower altitude, down on the flanks of the volcano. A fault reached
1,500m, causing a deep crack in the ground; a containing wall and the main road were both split. An
evacuation was planned, but no lava emerged that year so the evacuation order never needed to be
issued. This illustrated the cat-and-mouse nature of managing Etna!
The Famous 1991-1992 flows
In December 1991, flank eruptions on the eastern flank were sourced from an outpouring high up in the
Valle del Bove (1,500-2,000m), the catchment area for lava flowing at 30m³/s. Four months later, in
April, the lava was still pouring out and threatening the town of Zafferana. TV news crews descended on
the small town. Local officials came under pressure from a frightened community in fear of losing their
livelihoods to the flows. They knew that sooner or later the slowly advancing aa (blocky) lava would
destroy parts of the town.
Within 2 weeks of the eruption starting, the lava flow was advancing on the inhabited area of
Zafferana. The reason the lava was so hard to control was that it was flowing inside a lava tube
structure, with a crust on top and at the sides which had already solidified. Lava tubing meant that the
lava remained hotter for longer, as it was insulated from the cooling effect of the air at the surface;
hence it flowed larger distances. The lava is at 1,000°C when it first erupts, and the lava channels
usually crust over at the sides, forming levees, but in this case the lava crusted over on top as well. The
National Volcanology Group's experience paid off this time. They undertook the following actions:
1) An earthen barrier, 30m high and 160m long, was constructed in order to constrain the flows; many
more were built behind Zafferana. Many did not survive and were soon overtopped, as the flows were
travelling at a rapid 50m/hr. They were built across the lava tubes that fed the flow. Tubing meant that
even after 7km the lava had only cooled by 20°C. Computer simulations were used to predict the path of
the flow ahead of time.
By mid-April the lava was getting nearer the town centre, and remote farm buildings on the northern
edge of the town were being destroyed. Some people felt that the Italian government had not
done enough to protect the interests of the people of Zafferana, as the population is quite peripheral to
the central government. Discontent can be summed up by the graffiti on one outbuilding, which read
Grazie Governo (Thank you, government).
2) Concrete blocks were used: the US Air Force flew them in by helicopter from a nearby airbase and
dropped them near the source of the flow with the intention of diverting it. Some blocks exploded due to
the high temperatures and some simply floated away. The idea was to block the main tube, causing the
lava to back up and overflow onto another part of the volcano's flank. Again unsuccessful! Police kept
people out of the advancing lava front area as the Italian Navy prepared to drop explosives to divert
the flows.
3) Explosives: the explosives worked, allowing the lava to be diverted onto another part of the volcano
that was uninhabited. The first explosion was at 2,000m; at this point a government state of emergency
was issued, when the flows were only 5km from Zafferana. A lava stream that flowed above ground for a
few metres was targeted. Six tonnes of explosives were used to blow up the south side of the lava canal
into an overspill channel, to divert and cool it, and bulldozers were used to 'shove' blocks of
concrete/steel into the lava tunnel to seal it off. Police evacuated a 2km radius before the explosion. On
May 27th, 20-30% of the lava entered the tunnel and the rest (70-80%) flowed into the specially
constructed overflow channel. Bulldozers were used to seal the rest of the tunnel off. The lava flow
stopped - Zafferana was saved. Enough lava was extruded in this eruption to fill more than 50 Wembley
stadiums!
A shrine to the Madonna (Mary) in the car park, since replaced by a larger statue, marks the spot
near the 1991-1992 lava front where the advancing lava was stopped in its path. The geologists take
credit for this victory, but others see it more as divine intervention!
The local population still feel as if they have been robbed of some beautiful places and are
always on alert. After this event the Italian government pledged $8 million of financial assistance
for villagers, and tax breaks, in order for the community to recover - but was this enough?
In 2001, fissures opened up in the vicinity of tourist structures, as low as 2,100m, and lava flowed
immediately down the south-east side of the mountain. The Rifugio Sapienza was destroyed and rebuilt.
In the car park at the foot of the mountain, some of the souvenir shops are on wheels so that they can
be moved during an eruption. The cable car station was destroyed in 1985 and again in 2001. The eruption
lasted only 23 days but will be remembered for its strength (strombolian eruptions) and the images it
offered reporters.
The main eruption began on July 17, 2002. Artificial banks were built using JCBs to save the cable-way
station, the tourist structures and the Sapienza refuge from the wall of 'advancing fire'. A fire-retardant
liquid was sprayed on vegetation to prevent fires on the forested slopes. At the beginning of August, near
Nicolosi, the lava slowed to 100m/day. 23 days after the start of the eruption the danger was still
imminent, the lava eventually stopping 4km away from the town. Both the cable-way station and the
refuge were saved by man's interventions. Tourists never stopped coming! Pneumatic drills were used to
break up the white-hot lava in order to get the car park operational again. 600m further up from the car
park and refuge, the top cable car station was burned out. Further eruptions have occurred from the
flanks as recently as May 2008, and since 2002 Etna has erupted every year!
It is estimated that almost all of the 77 deaths on Etna in recent history have been due to people
ignoring the safety guidelines at the crater and ignoring tour guides' advice by straying into hazardous
areas (e.g. 2 tourists killed in 1987 by an explosion). The others are from events such as lightning
strikes, indicating how successful Etna's management really is!
Monitoring & Prediction:
The GNV (Gruppo Nazionale per la Vulcanologia) co-ordinates volcanic research activity and predicts
eruptions up to a month ahead of time on Vulcano, Stromboli, Etna and Vesuvius. It monitors gas
emissions, vapour emissions, the chemical composition of springs, seismic activity and ground
deformation to give accurate predictions. Seismometers are set up near the central crater, recording
vibrations and looking for the violent harmonic tremors that indicate an eruption. Scientists know when
Etna will erupt but don't know where.
In order to work out which part of the flank will erupt, scientists set up equipment to monitor the
volcano's changes in physical geometry, using tiltmeters and surveying equipment (levelling). They
measure before and after eruptions. Before an eruption, the ground swells and the fixed reference
points move further apart. After an eruption the ground sinks and the lines return closer together.
They have done this over the last 20 years.
Fractures formed by this expansion and contraction set up lines of weakness in the volcano,
and it is probable that these fractures will be the sites where dykes of magma intersect the surface.
Hence these will be the sources of future flank eruptions, and mapping them is therefore of paramount
importance! Most of these dykes radiate out from the centre of the volcano. The cliff above the
eastern flank of the Valle del Bove is gradually expanding and becoming detached from the rest of
the volcano at a rate of 5m in ten years. When it gives way, a catastrophic landslide could result, similar
to St Helens, threatening the towns and villages below (hopefully not when we are there).
Sketch diagram showing Etna's main features
(ii) Management of Mount Pinatubo, Philippines: An LDC example where 'benefits of volcano monitoring far outweigh the costs'
Nature of the Hazard
Pinatubo is an active stratovolcano caused by ocean-ocean subduction on the Philippine island of Luzon, which had remained dormant for much of its recent history. Before the 1991 eruption the mountain was densely forested, providing territory for the native Aeta tribal people, and many village communities were long established on its flanks.
The eruption on the morning of the 15th of June 1991 was an Ultra-Plinian eruption, the second largest of the 20th century. However, it was the largest volcanic event to affect a heavily populated area that century! The eruption was a VEI 6 event that ejected 8 km³ of ash (10 times that of Mt St Helens). The catastrophic eruption turned day into night over much of Luzon, with the most violent phase lasting over 10 hours. The volcano had not erupted on this order of magnitude for 500 years, as evidenced by the deposits in the geological record. Fragments of burned trees trapped in previous pyroclastic flows were radiocarbon dated and confirm this age. Records could be found from three events: a one-thousand-year active phase from 4410-5100 yrs BP; a 500-year active phase from 2500-3000 yrs BP; and most recently a 200-year active phase from 400-600 yrs BP.

A typical stratovolcano
Impacts of the Event
Because the eruption was forecast by scientists from the Philippine
Institute of Volcanology and Seismology and the U.S. Geological
Survey, civil and military leaders were able to order massive
evacuation and take measures to protect property before the eruption.
Thousands of lives were saved (5000) and hundreds of millions of
dollars (at least $250 million) in property losses averted. The savings
in property alone were many times the total costs of the
forecasting and evacuations!
Falling ash blanketed an area of thousands of square miles, and avalanches of hot ash (pyroclastic flows) roared down the slopes of the volcano and filled deep valleys with deposits of ash as much as 600 feet thick. Before the cataclysmic eruption, about 1,000,000 people lived in the region around Mt Pinatubo, including about 20,000 US military personnel and their families based at the Clark Air Force Base (25 km east of the summit) and the Subic Bay Naval Base (70 km to the south of the volcano).
The slopes of the volcano and the adjacent hills and valleys were home to thousands of villagers.
Despite the great number of people at risk, there were few casualties in the June 15 eruption.
Management/Responses
This was not due to good luck but rather was the result of intensive monitoring of Mount Pinatubo by
scientists with the Philippine Institute of Volcanology and Seismology (PHIVOLCS) and the U.S.
Geological Survey (USGS). It could be argued that such monitoring would not have been so successful, and that more damage and destruction as well as a greater death toll would have resulted from the same eruption, had the USGS not been involved in such observations. Cynics feel that the reason for the prevention of a large-scale humanitarian disaster on Luzon is not one of underlying compassion or protection for the people of the Philippines, but rather that the US were forced to act in order to protect their own interests (i.e. military bases).
The first signs of the volcano awakening from its 500-year sleep came in early April, when steam and ash blasts were observed coating villages 10 km away. PHIVOLCS, based in Manila, were quick to respond and soon began monitoring the activity using a portable seismometer. This initial activity triggered the USGS to be called in, and they helped set up the Philippine Volcano Observatory (PVO) at the Clark Air Force Base. It was at this stage that a simple 1-5 alert level system was adopted to inform the local population of the likelihood of an eruption. The USGS had learned many lessons from Mt St Helens 11 years earlier; some of the team of geologists were veterans of that event and were not going to repeat the mistakes whereby people ignored the warnings by entering the evacuation zone. At this point an alert level 2 was issued, warning that magmatic intrusion could lead to an eventual eruption, and 2,000 people living within 10 km of the summit were evacuated.
By late April the PVO scientists, made up of both US and Philippine researchers, had mapped the pyroclastic deposits surrounding the volcano and published a hazard map, which was distributed by the government; in this manner the local population were made aware of the potential risks they faced from future events that might unfold.

By mid May tectonic tremors 10 km beneath the volcano were recorded, interpreted as rock fracturing as the volcano 'bulged' up and outwards. At around the same time the scientists used a correlation spectrometer (COSPEC) to measure the gas emissions coming from the volcano, and were soon alarmed to find that SO2 levels had increased to 500 tonnes/day. Several new seismometers were installed, which recorded nearly 2,000 minor quakes, and tiltmeters were also set up to record ground deformation.

Location of Mt Pinatubo and the Philippines
On the first of June the seismic activity changed to a shallower focus, less than 5 km deep and beneath the summit. Small explosions of ash occurred a few days later, accompanied by harmonic (volcanic) tremors suggesting that rising magma was filling the magma chamber at shallower depth. Around the same time COSPEC measurements decreased to 260 tonnes/day, which indicated that the volcano vent had become blocked: the system was not degassing, allowing a dangerous pressure build-up. It was at this time that an alert level 3 was issued, and an eruption was now expected within two weeks. The scientists only had one chance to get it right; they knew that if they issued such a warning and people evacuated only to find no eruption occurred, then they would soon return to the danger zone, exposing the population to increased risk.
On the 6th of June tiltmeters indicated bulging of the crater, and the next day a column of ash and steam was ejected 5 km into the atmosphere. At this stage an alert level 4 was issued, warning of an explosive eruption within the next 24 hours. 1,200 people living within 21 km of the summit were evacuated.
By the 8th of June a plug of viscous dacite magma was seen at the surface during a helicopter flight. The following day harmonic seismic activity increased to terrifying levels and equipment indicated the eruption was about to go 'linear'! A large explosion followed, sending an 8 km eruption column into the atmosphere. An alert level 5 was subsequently issued, indicating that an eruption was now in progress.

By June the 10th COSPEC measurements were as high as 13,000 tonnes/day and the full-scale evacuation of the Clark Air Force Base was in progress, with only a skeleton 1,500 security and maintenance staff remaining. On the 12th of June two further eruptions sent ash soaring 19 km in height and the evacuation zone was extended to 30 km from the summit. Manila airport closed and more staff left Clark. Two days later, on the 14th, a major eruption with a 40 km column occurred, and the remainder of the Clark base and those scientists still at PVO fled in blind panic through the pitch-black ash cloud to central Luzon. Further observation of the eruption was now made difficult because of the ash cloud.
Finally, on the 15th of June, the volcano did more than clear its throat and delivered an eruption of increasing intensity which hit VEI 6. Huge pyroclastic flows roared down the volcano's flanks as the eruption column collapsed; all recording equipment was destroyed and the main crater collapsed. Ash became widespread over large areas and many roofs collapsed as the weight of the ash increased considerably, heavy rain from Typhoon Yunya increasing its density. Many poorly constructed buildings with flat roofs suffered the most. 1,200 people were killed in the event, but only 359 as a direct result of volcanic hazards associated with the eruption; most of these lost their lives due to ash fall. The evacuation zone was increased to 40 km and 250,000 people were displaced; this fell to 200,000 within 3 months. 8 km³ of ash was released (10 times that of Mt St Helens).
By the 4th of September the alert level was reduced to 3 and the evacuation zone was reduced to 10 km. However, although the main eruption was over and it was unlikely that another VEI 6 would occur, the problems facing the Philippines were far from over. Up to 50 miles from the crater, widespread crop failure occurred due to ash deposits, causing widespread food shortages. Half of all livestock died, meaning real suffering and misery for many local farmers. A ten-year pattern of global cooling of 0.5°C was recorded as fine ash blocked out the sun's radiation. Lahars with a concrete-like density occurred for months after the event and caused widespread disruption. These mudflows caused widespread silting up of rivers, which led to frequent flooding many years after the event. High river discharge and bankfull levels led to landslides washing away poorly built houses. Property damage continued for a decade. A new crater lake and a subsided caldera 2.5 km wide were formed; the summit now stands at 1,485 m above sea level, some 260 m lower than its previous height! The US abandoned the Clark Air Force Base, and one year later the Philippine government used the momentum of the disaster and the withdrawal from Clark as a political excuse to withdraw permission for the US to have a military naval presence at Subic Bay, ending the 100-year US military presence.
In total 650,000 jobs were lost and 50,000 homes were damaged, but we must not ignore the 5,000 lives saved, along with the $250 million saved in economic damage. At least another $50 million was saved by moving aircraft and other equipment to bases in Hawaii. Other commercial savings are harder to quantify but are probably less than $100 million, while personal savings in terms of property and possessions of sentimental value are much harder to assign a monetary value. Nor can we simply assess the effectiveness of the management on monetary value alone; a value can't be placed on a life, although some cost-benefit analyses use a figure of $100,000 per life.

The cost of the management was in the region of $56 million. $1.5 million was spent on the scientists and their equipment/support. Around $15 million went on previous work in the years before the eruption by the USGS and PHIVOLCS. Governments and non-governmental organizations together spent about $40 million to evacuate, house and feed local residents and American military personnel and their dependents.

Therefore, it is safe to conclude that the management of Pinatubo was effective in terms of mitigating the worst impacts of the event as well as offering value for money! Management ensured that the damage and destruction were less than they would otherwise have been, preventing the region from sinking into poverty! At a conservative estimate the benefits of taking management action are probably in the order of 5 times what was spent on management: value for money! Lessons can be learned from Pinatubo, as it illustrates that effective management lessens the impact, although management of subsequent eruptions in other places may not achieve the level of savings realised at Pinatubo!
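The benefit-cost claim above can be checked with simple arithmetic. The sketch below uses only the dollar figures quoted in this section (the $100,000-per-life figure is the cost-benefit convention mentioned above); it is illustrative back-of-the-envelope arithmetic, not an official USGS/PHIVOLCS calculation.

```python
# Rough benefit-cost check for the Pinatubo management figures quoted above.
# Illustrative only; all dollar values come from this section of the text.

costs = {
    "scientists_and_equipment": 1.5e6,  # monitoring team and kit
    "prior_monitoring_work": 15e6,      # USGS/PHIVOLCS work before the eruption
    "evacuation_and_relief": 40e6,      # evacuating, housing and feeding people
}

benefits = {
    "property_losses_averted": 250e6,      # "at least $250 million"
    "equipment_moved_to_hawaii": 50e6,     # aircraft and other equipment
    "lives_saved_nominal": 5_000 * 100e3,  # 5,000 lives at $100,000/life
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
monetary_benefit = total_benefit - benefits["lives_saved_nominal"]

print(f"Total cost:    ${total_cost / 1e6:.1f} million")
print(f"Total benefit: ${total_benefit / 1e6:.1f} million")
print(f"Monetary benefits alone are {monetary_benefit / total_cost:.1f}x the cost")
```

Counting only the monetary savings gives roughly the 'benefits of around 5 times the spend' quoted above; adding even a nominal value for the 5,000 lives saved pushes the ratio far higher.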
Chapter 3: World Cities
1. The Global Pattern of Megacities, World Cities and Millionaire Cities
One of the key trends of the 20th century has been the rapid urbanisation of the world's economy; that is to say, there has been an increase in the proportion of the world's population living in urban areas! Urbanisation is a demographic change which started in the last century and is still accelerating in some areas today, in the 21st century, and it has caused profound economic, social and environmental changes that society has had to adapt to.
Urbanisation: The growth in the proportion of a country's population that lives in urban as opposed to rural areas. The term is sometimes used less accurately to describe the process of moving from a rural to an urban area.
Tokyo Skyline by Night: A Mega, Millionaire World City
Tokyo from space: Source NASA
List of the World's Biggest Cities, 2008:
1 city (Tokyo): 30 million people (probably 37 million)
5 cities (New York, Mexico City, Seoul, Mumbai, São Paulo): 20-30 million people
21 cities: 10-20 million
41 cities: 5-10 million
31 cities: 4-5 million
38 cities: 3-4 million
81 cities: 2-3 million
360 cities: 1-2 million
380 cities: 0.5-1 million
Megacities:
History:
The term megacity is new but the phenomenon clearly is not: even the Greeks regarded their megalopolis as a very big place, with Athens in 432 BC approaching 300,000 (alarmingly large to people at the time). Rome, however, was another matter: it was much bigger, a far more serious concern, and it can be considered in historical terms as a trailer for things to come. At around 400 AD Rome had a staggering population of nearly 1 million by most estimates, though some experts place it as high as a whopping 1.4 million! This led city officials of the day to devise complex systems of refuse collection, international food supply, long-distance water transfer and even traffic management! Amazingly, Roman officials faced population pressures similar to those facing modern city officials and governments today.
Constantinople may have equalled ancient Rome in the Middle Ages, Peking in the early modern period;
but, some time just after 1800, London became indisputably the greatest city that had ever existed in
the world. And it began to expand at a dizzy rate, establishing a precedent that would be followed, all
too often, first by North American and Australasian cities in the nineteenth century, then by the
cities of the developing world in the twentieth. The population of the area that later became the
Metropolitan Board of Works and then the London County Council rose from 959,000 in 1801, passing
the one million mark ten years later, to reach 2,363,000 in 1851, more than a doubling; it then doubled
again, to 4,536,000 in 1901. But by the start of the twentieth century, the LCC area was already
inadequate as a description of the real London: that real London was Greater London, a statistical
concept that happened also to coincide approximately with the Metropolitan Police District, which had
more than doubled from 1,117,000 in 1801 to 2,685,000 in 1851, but had then increased no less than two
and a half times to 6,586,000 by 1901: a truly prodigious rate of growth. Even by 1801, Greater London
had more than 12 per cent of the population of England and Wales; by the end of the century, over 20
per cent. By 1885, as was pointed out at a meeting of the Statistical Society, London was by far the
largest city in the world: its population was larger than that of Paris, three times that of New York or
Berlin within their then limits. London did not enjoy this status for long, and it was soon overtaken by cities in a tremendous rush to expand, such as New York, which grew from 1.4 million in 1898 to 7.45 million in 1940. During this period New York went from being the 3rd largest city to being the largest, and by 1950 it was recognised as the first megacity!
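The doubling rates quoted above correspond to surprisingly modest compound annual growth rates. As an illustrative sketch (the function names here are mine, not from any source), London's and New York's figures work out as follows:

```python
# Compound annual growth rates and doubling times for the population
# figures quoted above. Illustrative arithmetic only.
import math

def cagr(p0, p1, years):
    """Compound annual growth rate between two population counts."""
    return (p1 / p0) ** (1 / years) - 1

def doubling_time(rate):
    """Years needed to double at a constant compound rate."""
    return math.log(2) / math.log(1 + rate)

# Greater London, 1851 -> 1901: 2,685,000 -> 6,586,000
r_london = cagr(2_685_000, 6_586_000, 50)
# New York, 1898 -> 1940: 1.4 million -> 7.45 million
r_ny = cagr(1_400_000, 7_450_000, 42)

print(f"Greater London 1851-1901: {r_london:.2%}/yr, "
      f"doubling every {doubling_time(r_london):.0f} years")
print(f"New York 1898-1940: {r_ny:.2%}/yr, "
      f"doubling every {doubling_time(r_ny):.0f} years")
```

Even growth of under 2% a year, sustained for half a century, was enough for Greater London's 'truly prodigious' expansion; New York's roughly 4% a year meant a doubling every couple of decades.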
Mega Cities: Now a LEDC trend
London and New York kept some kind of global preeminence after that, of course, into the 1950s, when
the growth first of western cities like Los Angeles, and then of the great cities of the developing
world, far overtook them. Since 1950, propelled by high rates of natural increase and internal
migration, many cities in this group have grown to be numbered among the World's largest. While in
1960 nine of the world's nineteen mega-cities were in developing countries, by 2008, 48 out of the 68
cities with a population of over 5 million were in developing countries!
Megacities in essence are metropolitan areas with a total population in excess of 10 million people. The population density is usually over 2,000/km². A megacity can be a single metropolitan area or two or more areas that have spread and converged on one another to form a sprawling megalopolis. There are few megacities (27) but their number is increasing.
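The size thresholds used in this chapter (millionaire city: over 1 million; megacity: over 10 million) can be written as a small classifier. The example populations below are rough illustrative figures of my own, apart from Tokyo's ~37 million, which appears in the list above:

```python
# Classify cities using the population thresholds given in this chapter:
# millionaire city >= 1 million, megacity >= 10 million (metropolitan area).

def classify(population):
    """Return the chapter's size categories that a city qualifies for."""
    labels = []
    if population >= 1_000_000:
        labels.append("millionaire city")
    if population >= 10_000_000:
        labels.append("megacity")
    return labels or ["below millionaire threshold"]

# Illustrative (assumed) populations; only Tokyo's figure is from the text.
cities = {"Tokyo": 37_000_000, "London": 8_600_000, "Oxford": 150_000}
for name, pop in cities.items():
    print(f"{name}: {', '.join(classify(pop))}")
```

Note that every megacity is by definition also a millionaire city, whereas world-city status (discussed below) depends on function, not size, so it cannot be decided by a population threshold at all.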
World City:
A city that acts as a major centre within the global hierarchy, serving multiple roles such as: centres of national and international trade (for its own country and neighbouring countries); finance; business; politics; culture; science; information gathering, diffusion, publishing and the mass media, and all the associated service activities. They are seen to be centres of advanced professional activity, and they also act as centres of conspicuous consumption of luxury goods for the minority as well as mass-produced goods for the multitude! These functions began to grow in importance in the 20th century as such cities diversified away from traditional manufacturing (which was later outsourced), so they went from strength to strength. A world city, therefore, does not only serve a region or country: it serves the whole world. Most world cities owe their special status to a rich and diverse history that has allowed them to develop a strategic advantage over many other conurbations. This is not
just as a result of their size but due to their function! A global hierarchy (power structure) exists in the economics between cities: London, New York and Tokyo are all regarded as 'world cities' (John Friedmann proposed this 20 years ago), as they can be seen to be the 'global financial articulations'. Some other cities can be classed as 'multinational cities/articulations', such as Miami, Los Angeles, Frankfurt, Amsterdam and Singapore. Paris, Zurich, Madrid, Mexico City, São Paulo, Seoul and Sydney are 'important national articulations', all forming a 'network'.
Work of Hall
Peter Hall, a London University planning geography professor, was among the first to recognise world cities, and explored them in a series of lectures in 1997; he still delivers similar lectures today (more info at www.megacities.nl). In these lectures he identifies the phenomenon of globalisation and its impact on the urban system, coupled with what can be called the 'informationalisation' of the economy: the progressive shift of advanced economies (in developed countries) from goods production to information handling, whereby a great proportion of the workforce no longer deals with material outputs. This process is sometimes referred to as tertiarisation, where a greater proportion of the population is employed in highly skilled service-industry occupations as opposed to primary and secondary industries. Peter Hall goes on to comment that this is the fundamental economic shift of the modern age, equally as important as the shift from an agrarian (farm-based) to an industrial economy in the late eighteenth to nineteenth centuries.
He also explains that globalisation of the world economy has led to a shift of manufacturing from traditional manufacturing centres in developed countries, such as Manchester, UK and Detroit, USA, to emerging low-wage economies in LEDCs such as China and India; China is now the factory of the world, receiving many richer nations' 'back-end' work. This process of globalisation has been aided by the operation of transnational corporations, which take advantage of global advertising and marketing in order to sell global products.
Thus, as production disperses worldwide to LEDCs, services increasingly concentrate into relatively few well-established world trading 'global cities', with a second rung of cities immediately below these which can be distinguished as 'sub-global'. These cities are centres of financial services, house the headquarters of major transnational corporations and production companies, and are the seats of major world-power governments. They attract specialised service industries such as law, accountancy and public relations, themselves increasingly globalised and tied to headquarters locations. This clustering in turn attracts further business, tourism and real estate functions. Business tourism allies with leisure tourism, as both kinds of 'tourist' are drawn into the city partly by the cultural and historic reputation that such upper-rung cities hold over less prestigious lower-rung cities; prestigious addresses for head offices, for example, are vital to the reputation of successful transnational businesses. The influx of such activities also impacts on the provision of transport, communication, personal services and the entertainment/cultural sectors. There is therefore intense competition between such world cities to draw in new investment and capital in order to drive national economies and secure prosperity for cities (or even nations) and their citizens, in light of the decline of the traditional manufacturing industries which once underpinned the success of the Western world!
A good example is London as a centre of world finance: in recent years it has been the prosperity of the foreign investment banks that has allowed the UK economy to remain prosperous when pitted against global superpowers. There are now more foreign banks in London (434) than domestic banks. However, it's not always as simple as 'take the money and run', but more 'make hay while the sun shines'! An economy heavily reliant on the banking sector, like the UK's, can suffer economic 'meltdown' during times of recession; for example, the 'credit crunch' triggered by sub-prime lending in the United States in 2008 led to recession and a lack of trust in the financial markets in the UK, due to the increasingly globalised nature of the world economy.
(i) World Cities are Resource Centres:
Cities grow because they are resource bases. Companies need access to knowledge in order to grow, and by locating in cities they can capitalise on temporary or semi-permanent sources of knowledge. When the right set of factors occurs within a city at the right time, e.g. the correct level of knowledge resources is available at the right moment in history, then people can capitalise and benefit economically from the pools of innovation and bursts of entrepreneurship that flourish during such times. A good example would be California, where in the correct economic climate innovation leads to the development of successful technologies that bring wealth to the city and to the nation as a whole, e.g. the success of Apple innovations like the iPod and iPhone, which were developed here and later outsourced to production centres in the developing world (as explained earlier). Two types of knowledge exist: codified knowledge, which is spread through the use of technology such as the internet and is available anywhere, and tacit knowledge, which can only be gained through face-to-face contact. It is the latter type that still makes cities centres of education and research (universities), which big business feeds into and takes advantage of when developing new innovations.
(ii) World Cities are Learning Centres:
Companies learn through trial and error, or to put it in business-speak, research and development (R and D). If companies are allowed to learn, they develop through cycles of growth and development. To facilitate this they need to be part of a network of learning that consists of clusters of universities and other educational establishments, as well as being in close proximity to policy makers and decision makers (governments) along with other company research centres, and so on. World cities can therefore be viewed as 'learning regions', 'smart cities' or 'creative hubs'.
(iii) World Cities are Places of Spatial Proximity:
Tacit knowledge is particularly likely to exist, develop and be confined to central business districts (CBDs), university campuses and science parks. These are places where innovation is allowed to spawn. Meetings and contacts in such places occur on a regular basis, providing, sometimes by design and sometimes by chance, the spark for new ideas. Thus innovation is more likely to develop in places where there is a highly active and educated population with many opportunities for interaction and knowledge sharing. This type of high-level advanced analytical ability is what makes the human race so truly unique, and you could argue it is what people do best!
Distribution of Millionaire Cities as proportional symbols
Millionaire City: A city with over a million inhabitants. There were 578 in 2008!
Distribution of Millionaire Cities 1975
Globalisation: a set of processes leading to the integration of economic, cultural, political and social
systems across geographical boundaries. It refers to the increasing economic integration of countries
especially in terms of trade and the movement of capital.
Informationalisation: The increasing importance of the information-based sector of the economy, which relies on electronic data transfer. It is this process that has had a profound effect on the geography of megacities over recent years.
Summary
World Cities can be characterised in 3 ways:
They have shed a lot of their low-value activities (manufacturing, distribution, assembly, even some lower-skilled services like call centres) to other cities or countries at a lower level of economic development which can do this 'back-end work' more cheaply.
They have a high level of synergy in their economic structures (two or more companies, groups or individuals working together towards mutual benefits that could not be achieved without this close cooperation).
They offer a wide range of jobs; however, they tend to encourage a polarised labour force, i.e. at the top end there are limited specialist roles that require a high level of education and personal skills/talent and offer very high rewards, whereas at the bottom end of the spectrum there is a diverse set of semi-casual roles which require less skill, education and talent; these are low paid and offer fewer prospects! (You decide which you want to fill and then you will realise why you are really at college!) This can lead to increasing spatial differentiation of types of residential area within cities (i.e. why some areas are so much more affluent than others; remember social segregation in Manchester last year, which was mainly down to socio-economic class controlling where people live).
Europe's Urban Hierarchy
Cities increasingly tend to compete, and a distinct hierarchy exists; this is particularly evident in Europe but is emerging elsewhere too (e.g. the USA and Japan, but to a lesser extent). In Europe the only unquestionable world city is London, with perhaps Paris qualifying based on its importance as an information exchange, although it is a smaller megacity.
Below these two in the European hierarchy are the sub-world cities, which are usually the national capital cities, as well as some cities that have adopted specialised functions and act as cultural or commercial capitals. The latter are smaller, with metropolitan areas of around one to four million people. They are able to compete with world cities to some effect, but only in their specialised functions, such as:
Brussels, Rome, Geneva  Government functions and political power linked to EU.
Frankfurt and Zurich or Amsterdam  banking.
Milan  Design
A key question for Europe is: with the development of the EU Single Market (signing of the Maastricht Treaty), will the fate of these higher-order cities be assisted to the detriment of the lower-order cities? This is an open question but provokes interesting debate; cities such as Brussels, Frankfurt and Luxembourg may well become more influential than London and, to some extent, Paris, which might then appear more peripheral.
The European Hierarchy of Cities (some attempt has been made to give an approximate order)
Key
1. London
2. Paris
3. Brussels
4. Amsterdam
5. Berlin
6. Lisbon
7. Milan
8. Madrid
9. Copenhagen
10. Geneva
11. Frankfurt
12. Rome
13. Stockholm
14. Helsinki
15. Warsaw
16. Vienna
17. Dublin
18. Bern
19. Zurich
20. Budapest
21. Oslo
22. Edinburgh
23. Birmingham
24. Manchester
25. Turin
26. Venice
27. Naples
28. Florence
29. Bologna
30. Prague
31. Valencia
32. Stuttgart
33. Marseille
34. Lyon
35. Seville
36. Hamburg
37. Leipzig
38. Hanover
Evidence of Growth of a European Megacity Region?
Eurocities form a tight inner circle comprising what the European Commission's (EC's) Europe 2000 report calls the National Capitals Region (London, Paris and Amsterdam), all within a convenient radius for face-to-face contact by air and, increasingly, by high-speed rail links! So it appears that such cities will form the European core of the European urban system.

In turn they will be connected to an outer ring of key regional cities some 500-700 km away, linked by regular and frequent air services, e.g. Copenhagen, Vienna, Zurich, Milan, Madrid, Dublin and Edinburgh. These places will also be connected by high-speed train to cities within their own 500 km radii, e.g. Milan with Turin, Venice and Bologna; Berlin with Hanover, Hamburg and Leipzig; Madrid with Seville and Barcelona; and this will join up the points of contact between the European and regional urban systems.
Rather confusingly, with their typical population range of one to four million, these national capitals and
commercial capitals overlap in size with the major provincial capitals of the larger European nation
states: thus Manchester and Birmingham, Lyon and Marseille, Hannover and Stuttgart, Florence and
Naples, Seville and Valencia. These places typically serve as administrative and higher-level service
centres for mixed urban-rural regions, most though not all of them prosperous, and they have shown
considerable dynamism even while they too have lost traditional manufacturing and goods-handling
functions. These cities are all millionaire cities in their own right or can be classified as such when their
greater urban areas are taken into account.
Conclusion
There is evidence to suggest that there is an emerging Mega City Region (MCR) within Europe,
known as the 'European Capitals Region', which has become increasingly connected through improved communications links that allow face-to-face contact and the sharing of knowledge (tacit knowledge).
This interconnected Mega City Region has developed as a result of globalisation of the world
economy and a shift to employment in service sector industries that relies on information processing.
It is argued for instance that the growing importance of these large mega city-regions in the
knowledge economy may have the counterproductive effect of encouraging territorial competition
between places rather than promoting the urgent task of economic sustainability!
Economic development and change related to Urbanisation
Urbanisation: Mexico City
Urbanisation: The growth in the proportion of a country's population that lives in urban as opposed to rural areas. The term is sometimes used less accurately to describe the process of moving from a rural to an urban area. Growth rates of cities are especially high in LEDCs.
Causes of Mexico City's Growth
Mexico City's growth is too rapid for accurate records to be kept, as hundreds of new migrants flood
into the city each day. Officials state the population to be 16 million, but the United Nations
suggests it could be double that!
1. Natural Growth (66% of total)
· Death rate 10/1000 (better health care, medicines)
· Birth rate 30/1000 (lack of contraception, Catholic)
2. Rural to Urban Migration (33% of total)
· 3000 rural migrants arrive per day
· adjacent rural states of Puebla & Hidalgo
· impoverished rural states of Guerrero, Oaxaca & Chiapas to
the south
a. Pull factors to Mexico City
· 80% of people in Mexico City have access to health care
· over 60% have access to piped water
· 75% have access to electricity
· wages 6 times higher than in rural areas
· people are attracted to the 'bright lights of the city' as they perceive that they will have a
better quality of life: better health care, schools, jobs, improved earnings, housing,
entertainment and cultural diversity (e.g. music, restaurants, hotels and professional jobs with TNCs).
b. Push factors from places like Puebla (think taxi driver)
· 80% of people in Puebla have no clean water
· 33% have no access to medical care
· 50% literacy rate
· Lack of work, and a gender divide where men work the land and women bring up children and look after
the livestock.
· The spiralling rural population has come about because improvements in healthcare and living
standards have lowered the death rate, while the birth rate has remained high for traditional and
religious reasons that discourage the use of contraception. The populations here are therefore forced
to farm more marginal land (marginal clearances of wooded areas), which produces lower yields and
leads to environmental damage, so often there is simply not enough food or land to feed an
ever-increasing population. There is also a lack of work in these regions, so people feel they cannot
survive there and take the decision to move.
· Step migration results, whereby people move from rural areas surrounding the city to smaller urban
centres like San Cristobal, where they make a living selling wares or doing menial jobs; this work
soon 'dries up', so they are forced to move on into the city for greater employment opportunities.
Growth of Mexico City
· Growth rate of 5% per year since 1945
· 1900 - population of Mexico City was 370 000
· 1930 - it was 1 million
· 2000 - it was estimated at >20 million
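The arithmetic behind these figures can be sanity-checked with a short compound-growth calculation. This sketch is illustrative and not part of the source: it assumes simple compound growth at the quoted rates, and projects the 5% rate from 1930 even though the text dates it only from 1945, so it overstates growth somewhat — consistent with the estimate being "over 20 million" rather than 30 million.

```python
# Illustrative sanity check of the study guide's figures (not from the source).

def project(pop, annual_rate, years):
    """Compound population growth: P = P0 * (1 + r) ** t."""
    return pop * (1 + annual_rate) ** years

# Birth rate 30/1000 minus death rate 10/1000 gives natural increase of
# 20 per 1,000, i.e. 2% per year (migration makes up the rest of the 5%).
natural_increase = (30 - 10) / 1000  # 0.02

# From ~1 million in 1930, 5% per year compounded over 70 years lands in
# the tens of millions, broadly consistent with ">20 million" by 2000.
pop_2000 = project(1_000_000, 0.05, 70)
```

The key point for exam answers is that a seemingly modest annual rate compounds dramatically: 5% per year roughly doubles a population every 14 years.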
Effects of Urbanisation in Mexico City
1. Housing
60% of the population live in illegal spontaneous settlements, e.g. Tlalpan, Valle Gomez and Azcapotzalco.
In 1995 there was a shortage of 800,000 homes in Mexico City.
40% of existing homes are in a poor state of repair.
2. Waste Disposal
11,000 tonnes of rubbish are generated per day.
Only 75% is collected; uncollected rubbish often blocks drains.
Illegal dumping at Rincon Verde increases the risk of disease.
The Pánuco River receives 2,000 tonnes of untreated waste per day, which leads to contamination of
fruit and vegetables; these contain twice the level of lead permitted in the USA, which causes brain
damage in infant development.
3. Congestion /Pollution
3 million unregistered vehicles add 12,000 tonnes of pollutants per day.
Toxic smog reduces visibility to about 2 km.
Breathing the air is equivalent to smoking 60 cigarettes a day.
3,000 deaths annually from photochemical smog.
Highest ozone concentration in the world.
Sewage flows into open rivers and the water is then used for irrigation of crops!
Fruit and vegetables have twice the acceptable lead levels of the neighbouring US, as leaded petrol is
still used in cars. This has led to 50% of newborns in Mexico City having lead levels that may impair
their development.
4. Poverty/Employment
50% of the population is engaged in the low-paid informal sector.
Only 33% of the population is economically active, due to competition for jobs.
Child labour often results as children are forced to work; this has the knock-on effect that they miss
out on education and so become ever more deeply trapped in a cycle of deprivation.
The large informal employment sector employs people doing menial jobs, as there is no social security;
however, this reduces the taxes paid to the government, so there are fewer resources to spend on
overcoming the city's many urban problems.
The poor are paid on average US$4 per day, 1/20 of the wage they could get in the US. Many people
work long hours, and it is not uncommon to work 16-hour days without the working rights and
regulations that many people take for granted in the Western World.
Although wages are low, living costs are high.
In some 'illegal districts' where spontaneous settlements have been set up, access to education is
limited and school drop-out rates are high.
5. Health Care
Although better than in rural areas generally, 2/3 are still without it.
6. Physical Constraints
Mountains block the city's physical growth, as can be seen from satellite images, so there is simply
not enough space left for further growth, leading to overcrowding.
Much of the basalt groundwater aquifer that supplies Mexico City with its water is running dry due to
over-extraction and loss of forestry in the catchment area.
Solutions
The Hoy No Circula ('no driving today') scheme was set up to reduce congestion on the roads, with
limited success, as the rich often simply buy two cars.
An underground train/metro system has been part of the answer to reducing congestion.
Buses running on unleaded fuel have successfully cut lead levels.
Sewage systems and water mains have been improved after many corporate investors pulled out of
developments in the city centre because of these urban problems.
In areas such as Valle Gomez, officials have worked together with the local population to provide new
schools and a medical centre (since the 1985 earthquake). This is therefore the model for the future:
empower the people, rather than simply the West giving with open cheque books!
An environmental protection scheme has been set up in the drainage basin above Mexico City, where the
vegetation and soil are protected from further damage so that infiltration into the aquifer does not
decrease any further, in order to minimise water shortages.
Why Migrate to the City? (Source: The Urban Challenge)
Migration into the cities of LEDCs has in the past been predominantly a phenomenon of rural-to-urban
movement. In recent years, movement from small urban areas to larger urban areas, or inter-urban
migration, has become increasingly common. In Bangalore, India, 58% of migrants to the city have
migrated from other urban areas. Often this is a second-stage migration after an initial move from a
rural area to a local small town or city (step migration). In the 1990s the rate of increase in the
urban populations of small urban areas was far greater than that of the mega-cities.
Some of this movement into large urban areas is permanent and some temporary, but the key motivating
factor is normally economic. The city is seen as a place of opportunity. In LEDCs, most employment
growth, improvements in levels of disposable income, expansion of amenities and increases in personal
freedom have occurred in urban areas.
Knowledge of the opportunities afforded by cities has never been more openly available, even to the
most isolated rural communities. The globalisation of telecommunications has meant that knowledge of
urban lifestyles and urban opportunities is not only increasingly accessible but is also glamorised
('bright lights of the city'). The urban consumer culture is the dominant model in the world of
advertising, television and radio. These forces are examples of the pull factors of rural-urban
migration: those considerations which draw migrants towards the city. Two of the most often cited
attractions of urban areas for rural migrants are health and education.
The reality of urban in-migration is that it is an act usually influenced by a range of factors. The
potential attractions of urban living must be set against the difficulty of living in rural areas.
These difficulties of rural life mean many experience a life of poverty, coupled with periodic natural
disasters. Rural communities often have very little economic and political power to influence and
participate in change. Push factors such as these are the variety of forces which prompt the
consideration of migration into urban areas. Figure 4 summarises a range of push and pull factors that
may influence rural-to-urban migration.
Main push and pull factors migrants experience in moving to the city.
Push Factors
· Rural poverty
· Sub-division of land to uneconomic size
· Agricultural technology displacing rural labourers
· Lack of public amenities e.g. education, health care
· Drought and other natural disasters
· Religious, social and political discrimination
· Unemployment and under-employment
· Family conflict
· War and civil revolt
· Government policy
Pull Factors
· Employment opportunities
· Higher incomes (6 x higher than in rural areas)
· Better health care and education
· Access to urban culture and freedoms
· Protection from civil and military conflict
· Promotion of urban consumer values via the media
· Glamour and excitement
· Family contacts
· Improving transport systems
· Government policy (more state spending in cities)
Attitudes to Urbanisation
Poor Residents (Valle Gomez)
Happy that they have higher-paid jobs, better accommodation and health care, and generally feel they
have more opportunities in the city. Some poorer residents, if successful, may be able to work their
way out of poverty and build a better quality of life for their families. Female poorer residents may
feel liberated from their traditional role to an extent, as they are now able to work in the wage
economy instead of rearing children and looking after cattle.
Unhappy because many feel their expectations of city life never came to fruition, and the reality of
life in the city is often worse than it was in the countryside due to overcrowding, disease, rubbish,
pollution (especially from transport), congestion, lack of community, and missing family who remain in
the countryside. A lack of jobs and poor wages mean their children are forced to work, so they do not
get the education they moved for.
Poor residents would also be unhappy because of the low wages they are paid, only 1/20 of the USA wage
for the same job. They also feel they have no real choice about where to settle and build their homes,
even if it is illegal.
Rich residents
Happy with improvements and business developments in the CBD, such as three new hotels and the new
underground train system, which make communication around the city easier and therefore more
attractive to further investment. Happy with the low cost of menial labour.
Unhappy with the fear of crime and reprisals from poorer residents, so they protect themselves with
security measures such as gates and CCTV systems, which leads to further resentment from poorer
residents. More affluent residents are unhappy with the city's congestion problems, yet display a
selfish attitude as they choose to opt out of the one-day-a-week no-drive scheme by buying a second
car. This has led to anger among the poorer groups of residents who cannot afford two cars. Some may
be unhappy with how the city officials and the government have allowed shanty towns to develop in some
areas of the city (even up to three blocks away from the CBD), many of which are illegal and
completely unplanned. Rich residents are also possibly better educated and aware that if the
government continues to allow the process of urbanisation, an increasing number of people will find
work in the informal sector, meaning the city will not benefit from the income tax they avoid paying
on their earnings. The rich are very aware that the net result would be that already overstretched
public services such as schools, hospitals and refuse collection would become further overloaded,
making the problem worse!
Mexican Government
The Mexican government would be happy to continue to encourage the development of Mexico City,
especially the development of the CBD with new and innovative services such as finance/banking, and to
promote a positive image of the city on the world stage. They have been keen to encourage the
development of three new executive hotels.
However, the Mexican government would want to discourage the development of illegal settlements, to
which they have a negative attitude, as they damage the city's image, avoid taxes and cause ecological
damage.
The Mexican government would like to encourage the development of partnerships between themselves and
the developed world, in terms of tax-break initiatives and debt-repayment allowances, in order to
allow the Mexican people to work themselves out of poverty. The Mexican government would try to
encourage local people to help themselves through partnerships between local communities and business
or foreign aid; a good example of this is the new school and health centre built in Valle Gomez.
The Mexican government would be unhappy about the large number of people avoiding taxes by working in
the informal sector, and hence repayment of the national debt would be slow.
The Mexican government would be unhappy with the mushrooming, out-of-control population of Mexico
City, and with the often unplanned, ad hoc illegal slum developments, which avoid taxes and land rent
but still place pressure on already overstretched services such as refuse collection, sewage
treatment, health care, education, transport and policing.
The Mexican government may worry that continued urbanisation and development is not sustainable and
may cause a continued increase in inequality between rich and poor. This is not only a moral problem
but one which is contributing to a two-tier society (the haves and have-nots), and one which could
even trigger civil unrest.
Marginal clearances in rural areas could lead to environmental loss, loss of habitat and loss of
agricultural land, which may result in food shortages. This is something the government would be
worried about and want to discourage.
Local Industry
Happy with low workforce overheads, as the national wage is on average only 1/20 that of its US
neighbour. Could be pleased that rapid urbanisation has allowed many industries to develop without too
many restrictions on where to build. They would also be pleased with the very limited amount of green
taxation and environmental regulation.
Unhappy because the pollution and unpleasant environment are bad for business image. Unhappy with the
inefficient transport network, which may reduce a company's efficiency and productivity. May be
unhappy that more skilled and managerial positions need to be filled by foreign nationals, since the
drop-out rate from education in Mexico City is so high. Unhappy with the risk of being affected by the
high crime rate.
Suburbanisation
1. What is Suburbanisation?
Suburbanisation is the process of growth of cities at their edges through the decentralisation of
population, resulting from the construction of new housing, shopping centres and industrial estates on
greenfield sites at the periphery of the city. It is the physical expansion of the urban area, often
termed urban sprawl. Suburbanisation has been encouraged by the growth of public transport systems and
by the increased use of the private car.
2. Causes of Suburbanisation
Suburbanisation is the construction of new housing, shopping centres and industrial estates on
greenfield sites on the periphery of the city. With every new housing estate and retail park, the city
grows outwards into the surrounding countryside. The term urban sprawl is used to describe this
spreading of urban areas into rural areas.
[Diagram: 'urban sprawl' - concentric zones from the city centre through the inner city to the
suburbs, with 'edge city' developments (industrial units/offices, housing developments and retail
complexes) within the limits of the urban area.]
The causes for suburbanisation (urban expansion at the city edge) include:
The desire for a better quality environment (less pollution, noise and congestion)
The belief that the acquisition of a suburban house is an outward sign of economic and social
success
More space for larger houses, gardens and expansion for retail parks
Land is often cheaper allowing low density housing to be built
The desire for proximity to open space and countryside
Changes in lifestyle associated with the family life cycle.
The belief that outer suburban or rural schools offer a better education
Perception that the outer suburbs offer a relatively crime-free environment
Jobs and services at nearby out of town retail and industrial parks
Improved public transport, road infrastructure and car ownership
"The process of movement outwards is a centrifugal force. Cities grow by the processes of succession
and invasion as people move from one city zone to the next."
Family Life Cycle
Stage in Life Cycle and Housing Needs/Aspirations:
1. Pre-child stage: relatively cheap accommodation, maybe a flat or apartment. The inner city is the
most likely location for a home, since most cheaper rented housing is found there.
2. Child-bearing stage: renting of a single-family dwelling, probably in the next zone out from the
inner city.
3. Child-rearing stage: ownership of a relatively new suburban home, located in the outer zones of the
city.
4. Child-launching stage: same area as (3), or perhaps a move to a higher-status area.
5. Post-child stage: residential stability.
6. Late life stage: maybe a move back to cheaper areas in the centre.
3. Characteristics of Urban Sprawl in Los Angeles, USA
Los Angeles is a classic example of a sprawling low density urban area.
Low density housing on cul-de-sacs, with gardens & garages in places like the San Fernando
Valley.
"Gated suburbs" with high levels of security
Developers build affordable homes costing $150,000 to attract new residents
Out of town retail & business parks, with footloose high technology industries
15 "edge cities", with populations over 500,000 surround LA. Each is a self-contained suburb
such as Mission Viejo, with housing, shops, offices etc.
Urban sprawl - LA city has 3.5 million people and is 30km across, whereas LA urban region
(including suburbs) has 15 million people and is over 115km across.
4. Impacts of Suburbanisation in Los Angeles
Suburbanisation is the process of Los Angeles spreading outwards as people move to the suburbs away
from the city centre. This movement to spacious, low-density and less polluted suburbs significantly
improves movers' quality of life.
Eventually edge cities are created: self-contained cities of 500,000 within the Los Angeles urban area
with their own shopping centres, leisure facilities and employment. However, suburbanisation also
causes extreme social segregation, as poor and disadvantaged sections of society remain in the inner city.
As more and more people move out and resources are spread more widely to cover the ever-expanding
city, less government money is put into the inner areas, which start to decline. Social tensions build
up due to resentment between the haves (whites) and have-nots (blacks), as well as the new migrants
(e.g. Mexicans) who move into the cheap housing left vacant by people moving to the suburbs.
Periodically, trigger events build on these background factors to cause huge displays of social
unrest, such as the Los Angeles riots of 1992.
The impacts of suburbanisation can be considered in terms of those that affect the inner city and
those that affect the suburbs, as well as by classifying impacts as social, environmental and economic
(see below):
Social Impacts
Inner City
· Only the poor and disadvantaged sections of society remain within the inner city, such as Watts
(Mexicans) and the southern part of Long Beach (Koreans and Cambodians).
· Poor quality of life/environment.
· Social tensions (e.g. 1992 LA riots) and resentment, as poorer districts such as Watts do not have
the same level of services as richer areas.
· Encourages more migration to suburbs.
Suburbs
· Extreme social segregation as wealthier groups move to low-density suburbs.
· People who move significantly improve their quality of life and social progress.
· High-security gated suburbs form to keep out "undesirables".
· Population growth: LA suburbs such as Orange County grew by 1 million people between 1970 and 1990.
Economic Impacts
Inner City
· Inner city decline.
· Reduction in resources, as decision-makers have to provide new infrastructure to service the new
suburbs.
· Social services are funded by local taxation, but inner city areas have high numbers of people who
pay little or no tax (illegal immigrants, the unemployed, the poorly paid), so the inner city declines
while more money is spent on education and medical care in high-taxation areas like Bel Air and Malibu.
Suburbs
· Creation of "edge cities" such as Mission Viejo: self-contained cities with their own shopping
centres, leisure facilities and employment.
· Increased resource consumption due to building new roads and houses, and increased commuting
distances.
Environmental Impacts
Inner City
· Smog levels have increased, with 1,130 tonnes of noxious gases emitted daily.
· High levels of deprivation and low taxes in areas like Watts are linked to decline in the quality of
the built environment, as well as higher levels of vandalism and graffiti.
Suburbs
· Loss of countryside and rural habitats, such as in Orange County, which has lost up to a third of
its agricultural land. California as a whole has lost a lot of agricultural land as LA has grown, and
this is a state that provides 10% of US agricultural income.
· Swallowing up of villages and small towns at the edges of the city.
· Vast sprawling suburbs (urban sprawl).
· Nearly 1,150 km of freeways have been built, many with up to 10 lanes, as LA has spread. This has
increased traffic levels to 8 million vehicles in LA as more people commute from the suburbs.
5. Attitudes to Suburbanisation
Orange County farmer
· Dislikes suburbanisation, as farming land is being encroached upon and roads are becoming busier,
· but likes the money they can receive for selling their land to housing developers.
Commuter from San Fernando Valley
· Likes living in the more pleasant and rural suburbs, but
· dislikes the cost, congestion and time implications of commuting every day into the city centre
from the suburbs.
Watts resident
· Dislikes the lack of investment in services and buildings in the city centre (fewer taxes collected
there) as more money is spent on the suburbs.
· If the resident were an immigrant, they would possibly like the better standard of housing and
services compared with their country of origin, as well as the greater future job prospects after
learning the language.
Mission Viejo residents
· Like suburbanisation, as they now live in a smaller city/town with a more pleasant environment and
less commuting.
· Dislike the idea that more and more people are following them to these "edge cities" in the suburbs.
LA: The City of Dreams (DVD Notes)
14.5 million people; 130 km (81 miles) across.
The city is geographically constrained between the mountains and the ocean.
The city takes its water from northern California and Arizona, which creates water problems.
The freeways have become choked with congestion as the city has grown.
The mountains trap the photochemical smog produced by the millions of motor vehicles that use LA's
highways.
LA has long been the centre of the entertainment business, with the Hollywood film industry attracting
world acclaim.
However, LA is a city of inequality, with rich areas such as Beverly Hills home to much of Hollywood's
rich and famous (the haves: whites), whereas inner city areas such as Watts have become areas of
social unrest and high crime (the have-nots: blacks and Mexicans).
Gun culture has grown in Watts and other poorer inner city areas over the last 20 years. The
eye-witness account of the lady on the video suggested that gun shops and pawn shops have replaced
video shops, and that muggings have become increasingly common; hence people are buying guns for
protection.
The LA riots in 1992 resulted from this civil unrest and the tension caused by such large differences
between the haves and have-nots.
In January 1994 LA was hit by the Northridge earthquake.
Causes of suburbanisation
· Crime
· Overcrowding
· Pollution (smog, especially in summer)
· Fires
· Plate margin
· More space and green, leafy streets in the suburbs, seen as safe for bringing up children
· Lower crime in suburbs
· Lower property prices in suburbs
· Lower commuting times
· Lower stress
· Better communications, IT and teleworking
· Good roads: large multi-lane freeways with up to 10 lanes.
Migration: LA is a city made up of migrants, 85% of whom come from Mexico, either as illegal or legal
immigrants. The remainder tend to come from Vietnam, China, the Philippines and Korea (i.e. the other
side of the Pacific). In the video, the take-away owner came for a better education and to learn the
language; he opened a small business to pay for his fees and, with the profit, decided to move to the
edge of the city. Edge cities, such as the Warner Centre 50 km from the CBD, have developed as a
result of this trend. There has also been a trend for offices to decentralise to the 'edge city'
location, which in turn encourages more people to suburbanise; edge cities thus have two functions,
residential and business. A problem that has occurred in LA's edge cities is that there is nowhere for
people to go for recreation, and many edge cities have lost the 'human touch' that the inner city used
to offer.
Individuals Featured
Lady in Music Industry
Making a 3,000 km move from west LA to Nashville, drawn by a perceived idea of lower crime, greenery,
quiet streets and the attraction of the music industry. This is actually counter-urbanisation. Gun
shops and pawn shops now occupy the retail units where video shops once were.
Homeless
Project Step: basic manual work and secretarial work to pay for shelter. People move from the mid-west
because of 'bright light' opportunities. The reality does not live up to expectations, and many feel
the bright lights of the city were a big misconception. Andy Deemas, featured in the film, works in
the post room of a law firm.
Restaurant Owner
Has moved from Korea to get a college education and learn English, initially for one or two years, but
will probably stay longer, more like 9-10. He is moving to Orange County on the city's southern edge.
Family who moved to edge city
Moved for a larger home at a cheaper price than in areas such as Malibu, where they once lived. They
now have a more peaceful environment with lower crime and a lower threat from natural hazards such as
earthquakes and landslides compared with Malibu. The short 5-minute commute has also allowed them a
better standard of living compared with their previous urban lifestyle.
Edge cities
· Accommodate the demand for more office space.
· Improvements in telecommunications mean offices no longer need to be exclusively in the CBD.
· Retail units sprang up where good communications links were, and the edge cities were born as
offices moved to these areas.
· Warner Centre Woodland Hills Complex: 50 km from the centre.
· Nowhere for people to go, as people no longer go to the CBD.
· Universal City Walk is a success: restaurants, shops, recreational spaces ("the human touch")
and even a small university.
LA Riots – A push from the inner city
Sparked by the "not guilty" verdict of the LAPD officers charged with the beating of motorist Rodney
King in March 1991, the 1992 riots claimed at least 53 lives, injured 2,300 people and damaged more
than 1,100 buildings.
For three days, Los Angeles was the scene of rioting, looting and even
killing. Hundreds of businesses were destroyed, their owners left
scarred with nothing more than memories. The city looked like a war
zone.
Unlike the 1965 Watts riots, the 1992 riots were dispersed across different cities, leaving scars in
areas as far away as Long Beach and Hollywood, and leaving many to attempt to heal the damage and
rebuild the city.
The Watts riot began on 11 August and required a curfew area to help reduce the violence, finally
ending on 17 August. By the end of the riot there were 34 dead and 1,032 injured.
There were a number of negative social conditions that led up to the
1992 Los Angeles riot. Problems associated with poverty,
discrimination and police brutality had persisted for decades. New
problems arising in the 1980s, including the mass influx of foreign
immigrants, the introduction of crack cocaine, massive job losses and
reduction of governmental financial aid, all added to the strain in the
lives of inner-city residents ("Understanding the Riots," 1992; Ong &
Hee, 1993).
Cultural factors such as the swift rise in crime associated with the
introduction of crack cocaine and the resultant increase in gang
violence, on top of sharp cutbacks in social support programs, were
new forces that altered the cultural dynamics of Los Angeles in the
1980s ("Understanding the Riots," 1992; Ong & Hee, 1993). The
authoritarian personality paradigm would suggest that these factors would produce a home life that
would cause people to be aggressive on a daily basis as well as an environment ripe for the inculcation of
prejudice. The social learning paradigm would assert that the factors listed above produced a culture
that became increasingly aggressive; this in turn caused L.A. residents to increasingly choose aggressive
means to achieve desired goals, and to integrate aggression into recreation and pastime activities. Also,
because they were witnessing and experiencing increased aggression on the part of the police through
programs like Operation Hammer, residents learned that aggression was not only acceptable to engage
in, but was necessary as a means to survival.
Institutional factors like the use of racist rhetoric by political officials, discrimination in governmental
institutions, abuse of residents by the police department, as well as deteriorating economic conditions
and dismantled government support, were all problems that persisted from the period prior to the 1965
Watts riot ("Understanding the Riots," 1992). The frustration-aggression paradigm would assert that
each of the previously mentioned hardships increased the level of frustration experienced by the
residents of L.A. Furthermore, as black residents saw new immigrants able to start up new businesses
relatively rapidly, and charge what many thought were unfair prices, feelings of being deprived were
awakened among many black residents. An increase in racial tension between immigrant business owners
and black residents resulted. The effects of this tension could be seen in the fact that almost all of the
businesses attacked during the 1992 riot were owned almost exclusively by whites or immigrants
("Understanding the Riots," 1992).
Conflict between racial groups is often a factor in areas
where large demographic changes take place in a relatively
short period of time. For example, the influx of immigrants
into the L.A. area diversified much of the poor areas of L.A.
and displaced whites as the majority population represented
in these areas. The Latino population growth rate between
1964 and 1992 exceeded 1,000 percent. Friction generated
by this kind of growth in communities is rarely solved in
short time periods. But through the institution of
community activities designed to increase social awareness
and cohesion of groups in a given community, as well as
culturally-oriented training programs in schools, ethnic
tensions can begin to be addressed.
Fact or Opinion?
Counter Urbanisation
1. What is Counter-Urbanisation?
Counter-urbanisation is the process of population movement out of major urban areas into much smaller urban areas and rural areas. People are said to "up sticks" and move away from large cities, physically crossing the city boundary and going beyond the greenbelt. The increased use of the private car, and electronic technologies, have enabled this to increase in MEDCs.
2. Causes of Counter-Urbanisation in Paris
There is no single, simple reason why people are leaving large cities and moving to smaller towns and villages. A combination of factors applies:
Improvements in rail and road transport, e.g. Marne-la-Vallée, which has 6 rapid express stations, is linked to Paris by the A4 motorway and had a TGV station opened in 1994.
Improvements in information communication technology (e.g. faxes, e-mail and video-conferencing
facilities).
Growth of retirement migration to quiet rural areas.
People's perceptions of the differences in quality of life between cities and rural areas.
Changing residential preferences amongst middle-aged couples with families.
Relocation of offices and footloose high-tech companies to small rural towns, e.g. La Cité Déscartes high-tech scientific centre in Marne-la-Vallée.
Government policies designed to encourage more people to live in rural areas, e.g. the Master Plan for Paris, set up in 1976, which was designed to develop growth along corridors and new towns such as Marne-la-Vallée.
3. Characteristics of Counter-Urbanisation
increase in population of rural areas
increased use of commuter railway stations in small towns & villages
increased value of houses in small towns & villages
increased construction of "executive housing"
conversion of farm buildings into houses
infrequent bus service (many households with one or two cars)
need for more low cost housing for young people as "newcomers" drive up house prices
Many traditional local shops and services change or close down to meet the demand imposed by the newcomers, e.g. the local post office might shut down, the local pub might change into a wine bar offering high-priced services only affordable to the newly arrived affluent people, and local corner shops might change into delicatessens offering expensive food goods.
4. Effects of Counter-Urbanisation in the Paris Region
Counter-urbanisation has a big impact on both the large cities losing population (e.g. Paris) and the small towns and villages in the countryside gaining population (e.g. Marne-la-Vallée).
Impacts on small villages and towns in the countryside that are gaining population
Economic
Newcomers inflate property prices, which locals cannot easily afford.
Newcomers are relatively affluent and lead car-based lifestyles; they tend not to use local shops or petrol stations, so local services are forced to close. This is especially prevalent in "dormitory" or suburbanised villages, where a large proportion of the population commute to work, leaving a small daytime population.
Public transport becomes underused and less viable, resulting in a decline in services.
Social
Resentment builds up between the newcomers who want to live in the countryside and the local
residents who work on the land.
Often newcomers form action groups to stop country practices like fox hunting, as well as
opposing any plans for development in the area that might spoil the peace and quiet they moved
there for.
Bored teenagers, not used to rural life, can cause a nuisance in small villages and towns.
Once tight-knit communities begin to lose community spirit as more and more people move in.
Some local services improve, such as cable TV, high-speed internet, and mains gas fitted instead of relying on oil tanks. However, some may suffer, e.g. local schools, as counter-urbanisers may send their children to schools in the area they moved from.
Traditional villages become dormitory towns or suburbanised villages, which is damaging to local services.
A suburbanised village is a settlement made up largely of daily commuters who are employed elsewhere
in a larger centre. These commuters have displaced the original residents or live in new housing at the
edge of the town or village. Suburbanised villages are characterised by a lack of retail outlets since the
commuters will use services in the centre of the city or in out-of-town shopping centres. Suburbanisers use the city as a place of work and relaxation, making use of its recreational facilities; their home in the countryside is just a place to which they return in the evening. Remember the Mars Bar!
Impacts on large cities losing population
Counter-urbanisation does not only have an impact on the place that the newcomers move to; it also has an impact on the place that they moved from, whether inner city or suburb.
Economic
Inner city decline due to reduced investment.
Unemployment as companies move out for more space and better environments.
Social
Decline of population from inner city areas.
Ethnic and wealth segregation.
5. Attitudes to Counter-Urbanisation
Local residents - dislike the newcomers and see them as "weekenders", as they do not contribute to the stability of village life while they are at work in the city during the week. They have different social norms (wine drinking, barbecues & fast cars) and are too "posh" for the locals. They increase the price of housing so that local people cannot afford local homes. Newcomers do not use public services, which causes their decline. Many locals will also feel unhappy that the newcomers, with their high-pace, car-based lifestyle, have impacted on the rural area in such a way that it takes on some of the characteristics of the city the newcomers came from, e.g. noisy, traffic-congested roads and a loss of community atmosphere where people rarely talk face to face, i.e. these newcomers make the rural area more like a suburbanised village (see above).
Newcomers - like counter-urbanisation because they now live in a delightful place, which is quiet,
has no hassle and is unpolluted. Pleasant countryside walks. Will encourage friends to move as
well.
Local shopkeeper - like due to potential for increased custom, but will need to modify produce
sold (videos, alcohol, frozen food).
Property developers/local builders - like due to increased market for new houses & conversions.
"Rural Revenge of the 'White Settlers' is Sweet"
One of the most dramatic trends now affecting the British population is the drift of people back to the countryside. More and more of us are moving to rural areas.
I suspect this comes as a surprise to most people - so often and so insistently have we been told about the plight of rural shops, buses and schools, caused by depopulation. Actually there is nothing new about rising rural populations. The proportion of the population living in rural areas has been increasing since the late 1950s, when it bottomed out at about 18%. It now stands at about one person in four. Even the percentage living in remoter rural districts is rising: from 9% in the 1961 census to 11% in 1991.
The trouble, of course, is that these new rural residents are not the same as the old ones. Instead of leaning on ploughs to discuss the habits of rooks, they zoom about in BMWs and organise action groups to oppose employment-providing factories. Or, as an academic puts it in a newly published book (The Rural Economy): "hence the market mechanism leads to middle-class, prosperous, mobile and 'countryside-conscious' incomers to the countryside, taking over and protecting 'positional goods' (houses, estates and rural quality of life) whose values are absolutely dependent on limited numbers of people having access to them".
In Northumberland, where the invasion of the recently urban middle class is still in its relative infancy, they are known as the "white settlers". But however much the older residents may resent them because of their effect on such things as the viability of the local bus service or the acceptability of muck-spreading or field sports, there is no stopping them. Short of xenophobic immigration policies operated by parish councils, nothing can be done to prevent the conquest of the countryside by the ex-urban bourgeoisie. Because of the car and the urban crime rate, it was inevitable before the fax and the internet came along. Now it is a stampede.
For centuries the boot has been on the other foot. Towns and cities have been swamped with immigrants from the countryside. As late as the eighteenth century, London's birth rate was so much lower than its death rate that it depended on rural immigrants just to sustain its numbers. In 1841 the life expectancy of Londoners was five years less than that of other Britons: 35 instead of 40. Cities were sinks into which the countryside sent its surplus people to die.
The reason people died younger in towns than in the countryside was not just that there were more diseases to catch. It was also that the better-off stayed at home: those who left for the city had already run out of options for earning a living; their elder brothers inherited the farm or forge. The industrial revolution, far from creating urban misery, did almost the opposite: its factories found a way to keep alive (albeit miserably) the surplus people the countryside did not support – long enough for Dickens and Shaftesbury to notice them.
So the middle-aged, middle-class, middle-management consultant who now pays over the odds for a house in rural Wiltshire, thereby depriving the local fencing contractor's son of one last chance to afford his own home locally, is in a sense only getting his own back. It was the fencing contractor's great-great-grandfather, a small farmer, who sacked the consultant's great-great-grandfather, the hungry casual labourer, for stealing a pig, and drove him to tramp the long road to London, where he was lucky enough to survive and eventually prosper. Revenge is sweet.
It is, as David Harvey argues in a chapter of The Rural Economy, a battle between those who wish to live off the countryside and those who wish to live in it: between those who hope their sons can get jobs if the sawmill down the road gets planning permission to expand, and those who oppose the planning application because the noise of the saws makes it harder for them to think as they compose advertising copy in their study.
The white settlers will win. But it is wrong to think of the countryside's inhabitants as separate tribes. They are more like separate age groups. Drive through the middle of any small country market town on a Saturday night and look at the groups of bored teenagers. They would give their hind teeth to be in Piccadilly Circus instead. Most city centres seem increasingly to serve one overriding aim: the provision of the means of youthful mate selection: restaurants, universities and clubs.
In the future when we all flit between portfolios of flexible jobs if the planners let us, we will each have
an urban phase in our youth followed by a rural retreat in middle age. From our rural home we will broker
our talents among different employers by electronic mail. If we should glance out of the window and see
Farmer Brown surreptitiously drenching the field with pesticide or galloping after a fox, the Parish
Council Privatised Swat Team will only be a telephone call away – the end.
Inner City Decline and Regeneration
Inner City Decline – Salford Manchester
The City of Salford covers 37 square miles on the western side of the Greater Manchester conurbation and is home to approximately 220,000 people. It is at the hub of the region's motorway and rail network. Salford lies on the northern bank of the Manchester Ship Canal and is bisected by the Bridgewater Canal, both of which have been important factors in the City's and the region's growth.
Salford is a City of contrasts. Its thriving business districts of Central Salford and Salford Quays lie within an extensive inner city area, including Ordsall, Weaste, Eccles, Pendleton, Broughton and Blackfriars, which has some of the worst characteristics of social deprivation within the region. Beyond
this inner area, suburbs vary from the picturesque village of Worsley on the Bridgewater Canal to the
more urban areas of Swinton and Walkden and the outer areas of Irlam and Cadishead. In turn, these
suburbs give way to large tracts of open countryside, much of which is prime agricultural land created
from the former moss lands of Chat Moss, Linnyshaw Moss and Clifton Moss.
Present-day Salford has its origin in the early years of the Industrial Revolution when the construction
of the Bridgewater Canal, the Liverpool and Manchester Railway and, more recently, the Manchester
Ship Canal, provided a lengthy period of economic and physical growth stretching to the Second World
War and beyond. This period saw the establishment of a thriving economy within the City based upon the
docks, heavy engineering, chemicals, coal mining and textiles. With this booming economy came a
growth in population and the spread of artisans' and employers' dwellings in old Salford and in the
outlying towns. In the past 30 years, however, the City's fortunes have changed dramatically. Whilst it is true that the Victorian and Edwardian period created considerable wealth for the City, it has also left Salford with a legacy of problems which it has only recently begun to come to terms with.
1. Causes of Inner City Decline in Salford
Between 1965 and 1991 Salford lost over 49,000 jobs, or more than 32% of its employment base. Several factors contributed to this decline, not least competition from NICs, the introduction of new technology, a lack of space for expansion, a congested road network not designed to accommodate large lorries, and the concentration of investment in London and the South-East. The biggest job losses were
experienced in the City's traditional manufacturing industries and although the service sector expanded
during this period, it was unable to compensate for the decline in manufacturing employment. As
factories closed down there were fewer customers for Salford's shops, which often also closed as trade
declined. The people who remained in inner city Salford, as others left due to suburbanisation, had lower
purchasing power, so shops struggled to survive. Unused and obsolete factory and shop buildings often
declined into a poor state of repair. Derelict, unattractive surrounding environment meant that new
firms often avoided the inner cities.
Salford's total population declined by approximately 60,000 (just over 20%) between 1971 and 1991. Factory job losses forced people to leave inner city Salford, and improvements in
transport, as well as a boom in new house building on cheap edge-of-city land encouraged movement to
Hale, Altrincham, Worsley and other suburban locations. The loss has been most acute in the inner city
areas where slum clearance and outward migration have resulted in the departure of many of the more
affluent, more mobile and younger members of the community leaving behind an ageing and often low
skilled population.
The decline of the City's economic base, coupled with extensive clearance, has created significant
environmental problems, particularly within the inner city. The last 30 years or so have also seen a
considerable change in the overall appearance of Salford, as the slum clearance programme of the
1950s and 1960s removed row after row of terraced housing to make way for new, predominantly Council-owned, housing developments. Today some 34% of the City's housing stock (approximately 33,700 dwellings) is Council-owned (significantly more than the national average), and about half of these
are tower blocks or maisonettes. Slum clearance and redevelopment, however, have not solved all of the
City's housing problems. Some 19% of the current housing stock (18,000 dwellings) was built before
the First World War, and the majority require improvement and repair. These areas of 19th and
early 20th century buildings are obviously old and often consist of boarded-up shops, poor quality houses
and derelict factories. The terraced housing is often privately rented and frequently overcrowded. In
addition, many newer Council properties built in the 1960s and 1970s, particularly the flats and
maisonettes, have not lived up to expectations. Tenant dissatisfaction is reflected by the high rate of
void properties and a deteriorating housing stock. These developments were built quickly and with
untested technology and materials. Consequently the construction was of poor quality and the buildings
have deteriorated with time.
Some flats suffer from condensation and dampness. They lack soundproofing and are poorly insulated
and expensive to heat. Residents on the upper floors have to rely on lifts for access, particularly parents with prams and pushchairs, and the elderly. Frequently the lifts break down or are vandalised. Many residents are afraid of being mugged, and the elderly are often trapped in their flats. There are few play areas for young children, and no community facilities for young people or the elderly. There is no sense of community: people do not walk past each other's front doors as they would walking down a street. As new people moved into the inner city area they took less care of their environment.
These problems are compounded by social deprivation which stems from long-term unemployment, poor
housing conditions, poverty and ill-health. As the more well off people moved out of inner city Salford to
the suburbs (suburbanisation) the inner city housing became less popular and increased the number of
empty properties, causing prices to drop. Families with little choice (less well off, unemployed, long-term
ill, single parent families or ethnic minorities) moved into Salford's cheaper inner city accommodation.
Fewer resources, more social problems, less stable communities and less commitment to the area all cause anti-social behaviour, vandalism and crime to increase. This growing stigma causes the reputation of the area to decline, and the growing exodus of wealthier families leads to a downward spiral of decline.
Inner city deprivation contrasts markedly with conditions in the more affluent parts of the City like
Worsley and Boothstown, where housing is among the best in the region and where high quality open
space is largely protected by Green Belt.
2. Characteristics of Inner City Decline in Salford
Economic Decline - in Salford there has been a high rate of factory and office closures, limited and decreasing job opportunities, very little private and public investment, and few new industries. E.g. between 1965 and 1991 Salford lost over 49,000 jobs, or more than 32% of its employment base (heavy engineering, chemical, coal mining & textile industries).
Population Decline - people of working age and with job skills have moved out of inner city Salford in search of work and better living standards. E.g. Salford's total population declined by approximately 60,000 (just over 20%) between 1971 and 1991.
Poor Environmental Conditions - inner city Salford has areas of derelict land, vandalised and empty buildings, litter, graffiti and poor quality housing. E.g. some 19% of the current housing stock (18,000 dwellings) in Salford was built before the First World War, and the majority require improvement and repair.
Poverty and Deprivation - inner city Salford includes higher than average percentages of homeless people, the elderly, poor people, single-parent families, unskilled workers and ethnic minorities. E.g. over 39% of the population of Salford do not own a car (the national average is 13%) and over 35% have no qualifications (national average 29%).
Statistics for Ordsall, Salford, Indicating Evidence of ICD

                                     Ordsall     Salford     England & Wales
Unemployment                         4.5%        3.8%        3.4%
No qualifications                    38%         35.5%       29.1%
Council housing                      41.5%       25.7%       13.2%
No car                               50.7%       39.2%       26.8%
Average house price                  £73,000     £165,000    £224,064
Crime rate/1000 people               –           14.1%       11.4%
Households with long term illness    39.25%      17%         16%
Key statistics for Ordsall
Population: 6,554
Area: 413.71 ha
Urban population density: 15.84 per hectare
City rank: 20th
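The density figure quoted above is simply population divided by area; a minimal sketch, using only the figures given in these notes, confirms it:

```python
# Check the Ordsall population density: people per hectare.
population = 6_554   # Ordsall residents
area_ha = 413.71     # ward area in hectares

density = population / area_ha
print(f"{density:.2f} people per hectare")  # 15.84 people per hectare
```

The same calculation works for any ward once its population and area are known.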
There are proportionally more people aged 20-34
Between 1991 and 2001 the population declined by approximately 1100 people
Almost 70% of the population are Christian
88% of the population are White British
22% of the population are married compared to an average of 43%
42% of the population have never married
35% of households are single person households below pensionable age
14% of households are single pensioner households which is lower than average
4th most deprived ward in Salford.
Improving health
Health deprivation is very severe throughout the ward
15% of the population aged 16-64 have a limiting long term illness. This is 4% higher than
average.
Reducing crime
The rate of burglary is lower than average and has dropped over the last 3 years
Theft from motors is higher than average
Theft of a motor is higher than average and rising.
Encouraging Learning, leisure and creativity
Pupil attainment at key stage 2 is below average in English, maths and science
Pupil attainment at GCSE 5A*-C was 19% in 2005 compared to an average of 45.6%
The four primary schools in the ward are all part of an Education Action Zone, a national
initiative to support schools working in challenging circumstances
Above average childcare provision
37% of people aged between 16 and 74 have no qualifications. This is above average
24% of people aged between 16 and 74 are qualified up to Level 4/5. This is 10% higher than
average.
Investing in young people
There are 374 lone parent households
95% are female lone parents
10% of female lone parents are in full time employment and 22% are in part time employment.
Promoting inclusion
The number of households claiming housing or council tax benefits has been falling since January
2004 and despite a slight rise is now 5% above average
Average household income has risen by over £15k per annum since 2002. Household income has
risen faster than average.
Creating prosperity
Unemployment is running at 9.4% which is 5% above average
43% of the employed population are in higher level and professional occupations. This is higher
than average
6% of the working population are skilled trades people.
Enhancing life
8.2% of properties are vacant representing a rise of 0.8% since January 2004
50% of properties are purpose built flats
Over 75% of households live in rented accommodation
44% of households rent from the local authority, which is higher than the average of 26%
House prices have risen by approximately 72% since 2002. Prices have risen more quickly than
the average
Numbers of dwellings have risen by 1651 net in the last 10 years
The number of properties changing occupier rose sharply during 2005 from 2.4% in quarter 1 to
13.3% in quarter 4
42% of people in employment travel to work by car or van which is 10% lower than average
20% of people in employment walk to work almost twice the average
52% of households do not own a car, over 10% higher than average
9% of all people in employment use the tram to travel to work.
Policies and processes of Inner City Regeneration in Manchester
a. Property-led regeneration policies
Property-led regeneration is the improvement of an area by means of the construction of new buildings
or the redevelopment of existing buildings (often involving a change of use). Examples of this include:
the use of former warehouses for new upmarket housing in dockland/canalside areas
the conversion of multi-occupancy blocks into more acceptable housing.
The emphasis is on changing the image, thereby improving confidence for further investment.
In the 1980s Urban Development Corporations were set up in 11 urban areas, including Manchester.
The corporations were agencies appointed by the government. They had the responsibility of planning
and development in their area and their main task was to attract new firms. They did this by preparing
sites for new businesses, marketing the area, investing in new infrastructure and improving the
environment.
Examples include either the Trafford Park Development Corporation or London Docklands.
b. Public-Private Partnership policies
These policies are the co-operation between various tiers of government (central and local) and other
bodies (public and private businesses) with the overall aim of achieving economic and social development
in an area. The foci are in terms of job creation, land reclamation, and environmental improvement. Areas
with high unemployment or with high concentrations of vacant, derelict and contaminated land are the
main targets.
Examples include the Salford Quays Development or Hulme, Manchester.
c. Housing Association schemes
Housing Associations are non-profit making organisations set up to provide rented accommodation.
Initially they were the third type of housing provider after the private sector and local authorities, but
during the last 20 years their influence has increased. They use a system whereby private capital is
borrowed either to build new houses, or to buy existing housing stock (e.g. former council housing), and
they seek to make returns on their investment, for further reinvestment. As they also receive
government subsidy, they are able to provide housing for many people at lower rents.
They are also part of a scheme to encourage greater home ownership. Some housing associations in the
inner city areas are using this system for shared ownership to initiate the process of home ownership in
areas where this is not the norm. In some cases, housing associations may offer rental packages on
furniture and other household items.
Examples include the Northwards Housing Manchester and Stockbridge Village Trust, Liverpool.
d. Gentrification
Gentrification is a process of housing improvement which encourages affluent groups, such as professionals and managers, to move back into poor inner city neighbourhoods. This is a process of re-urbanisation, because people are choosing to live in the inner city rather than in the suburbs or commuter villages. It is also a process by which the regeneration of inner cities takes place, but it is different from the schemes that have taken place at Trafford Park and Hulme. Gentrification is carried out by individuals or groups of individuals, and not by supported bodies.
Gentrification involves the rehabilitation of old houses, warehouses and industrial buildings on an
individual basis, but is openly encouraged by other groups such as estate agents, building societies and
the local council.
Gentrification occurs for a variety of social, economic and environmental reasons. All of the following
factors may apply:
The Rent Gap - this is the name given to a situation in which the price of land or property has fallen
below its real value. (i.e. there is a "gap" between its actual price and its potential price). This can
happen where a neighbourhood has been allowed to deteriorate and decline due to lack of maintenance
and investment. The housing can only fetch a low price but because of its size, design or character it
may have the potential to attract a high price if money is spent on renovation.
For example, a run-down house could be bought for £30,000 and sold for £90,000 following a £20,000 renovation. This represents a £40,000 profit on each house, which would be very attractive to builders, property developers or individual householders.
The Rent Gap concept sees capital or profit as the main motivation behind gentrification.
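The rent-gap arithmetic in the example above can be set out step by step (a sketch using only the illustrative figures from these notes, not real market data):

```python
# Rent gap example: profit from renovating an undervalued inner-city house.
purchase_price = 30_000   # run-down house bought at its depressed price (£)
renovation_cost = 20_000  # money spent on renovation (£)
sale_price = 90_000       # potential price after renovation (£)

total_outlay = purchase_price + renovation_cost
profit = sale_price - total_outlay

print(f"Total outlay:     £{total_outlay:,}")            # £50,000
print(f"Profit per house: £{profit:,}")                  # £40,000
print(f"Return on outlay: {profit / total_outlay:.0%}")  # 80%
```

The "gap" the concept names is this difference between the property's depressed price and its potential post-renovation value, which is what attracts developers' capital.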
New Types of Household - most industrialised countries are seeing a growth in the proportion of their
populations living in single or two person households without children. Such households have different
housing needs and are more likely to see advantages in living close to the city centre. Two words
emerged in the 1980s to describe these groups: "Yuppies" (Young Upwardly Mobile Professionals), who were seeing a steady rise in their incomes in the 1980s, and "Dinkies" (Dual Income, No Kids), who are couples where both partners are in full-time jobs but who do not have the costs of raising children.
Families with children are more likely to be concerned about the attractions of the suburbs such as
larger gardens, more attractive parks, a more peaceful environment, front drives and garages, and
higher achieving schools.
High Commuting Costs - commuting from the outer suburbs or commuter villages can be stressful, time
consuming and costly, particularly in larger cities. For people who work in the city centre the prospect of
cutting these costs by living in the inner city can be attractive.
The "Pioneer" or "Frontier" Image - this is an explanation for gentrification based on the idea that
some people enjoy the "challenge" of moving into a deprived inner city area which to outsiders may
appear threatening or even dangerous. It is claimed that gentrifiers see themselves as "pioneers"
helping to "tame" and "civilise" a run-down "wilderness" dominated by "hostile natives". It is an
explanation of gentrification which is probably of more importance in North American cities than in
European cities, but it almost certainly plays a part in all cities. Artists, designers, people with radical
political views and other such groups are often the people involved. By converting inner city buildings
such as warehouses, workshops and large houses into unconventional but carefully designed dwellings
these groups can emphasise their difference from mainstream society.
Government or Local Authority Action - in some cases local decision-makers deliberately plan to
gentrify an area as a way of regenerating a run-down neighbourhood. In the London Docklands the
building of very expensive housing was encouraged. The aim was to encourage investment by building
firms to bring high earners into the area to help boost the local economy .
Only a small number of inner city neighbourhoods may be affected but nevertheless the impacts can be
very significant. Gentrification can completely change the character of inner city neighbourhoods. Once
higher earners start to move into an inner city area neighbourhood property prices rise. This in turn
prevents poorer families from moving into an area and may encourage existing poorer families to move
out. It also encourages private landlords to sell their housing to the highest bidder and this cuts the
availability of cheap rented housing. The process not only affects the housing but the shops and
services. The process has both positive and negative impacts.
Gentrifiers are seen as a threat to the traditional inner city communities. There may be conflict
between the existing local people and the incomers.
Loss of business for local traditional low order shops (e.g. greengrocers).
An increase in high order specialist shops and restaurants.
Maintenance and refurbishment of old housing.
Higher house prices and rents reduce the supply of housing for low income households.
Regeneration of inner city districts and increased investment in property improves the
appearance of the local environment.
Increased local tax income for local authority from increased numbers of newcomers.
Higher car ownership increases congestion on local streets particularly as a result of lack of
parking space.
Opportunities for local businesses as a result of increased wealth in the district.
Refurbishment creates employment, such as design, building work, furnishings and decoration in
the area.
Examples include the Northern Quarter, Manchester.
Re-Urbanisation Manchester:
Re-urbanisation is a process that began taking place in Manchester as early as the late 1980's, but it
was rapidly encouraged after the IRA bomb of 1996, which kick-started the regeneration that has since
improved the urban environment.
It is a process whereby people move back into the city centre.
Gentrification is one process that has encouraged this in Manchester but other processes have
operated. Many new luxury flats and apartments have been built e.g. St Georges Island, Castlefield, as
well as many new small starter homes aimed at young couples (DINKIES), e.g. those built in redeveloped
Hulme. In the 1980's Manchester had fewer than 1,000 people living in its core, but all this has changed:
around 25,000 people now choose to reside in its inner city core (2010 figures). They live in
places such as Castlefield, The Northern Quarter, New Islington, the Green Quarter, and Salford
Quays.
2.6 million people now live within Manchester's actual boundaries; over 7 million others live in the wider
region, making it second only to London in terms of population size in Great Britain. For 11 million people
living within 50 miles of the City of Manchester, it is the place where they come to work, or to shop or
to visit the many attractions and entertainments. In any one 24 hour period the city has 355,000 users.
Since the IRA bomb exploded outside Manchester‘s Arndale Shopping Centre in 1996, the city has seen
much investment and change. In fact it was the bomb in 1996 that kick-started the regeneration which
has changed the face of Manchester. Manchester now represents the blueprint for redevelopment in
other cities and is at the forefront of developing sustainable inner city communities.
The Future: Sustainable Communities New Islington Manchester
Manchester‘s notorious Cardroom Estate (re-named New Islington) has been redesigned as a new
Millennium Village, to act as an example of a well-designed sustainable community.
The current Cardroom estate was built in 1978. The estate's community was suffering the effects of
massive depopulation, poor services and high levels of crime, and over one third of the houses were
vacant. The vision is to provide beautiful canal-side walks, cafés, cutting-edge architecture, moorings
for narrow boats, gardens, shops and trees.
The Strategic Plan Framework envisages a total of 1100 dwellings,
either to rent or to buy. A minimum of 100 will be affordable. There
will also be a school, shops, a health centre, crèches, workshops and
a gymnasium. Demolition started in February 2003 and work is expected to be completed by December
2010 at a cost of £180m. In 2005 Urban Splash pledged to provide homes "as cheap as chips" at New
Islington. The developer has signed a deal with Manchester Methodist Housing Group (MMHG), and the
Housing Corporation is paving the way for homes costing as little as £60,000.
An alliance of partners are working with English Partnerships on the
New Islington Millennium Community, including the local community, Urban
Splash, Urban Regeneration Company New East Manchester Ltd, Manchester
City Council and Manchester Methodist Housing Association, part of Great
Places Housing Group.
Islington Square is the first scheme to be completed in New Islington.
Designed for Manchester‘s Methodist Housing Association (MMHA) by
architects FAT, the 23 new houses are now home to some of the residents of
the former Cardroom Estate, which is undergoing a radical transformation.
The £2.3m housing scheme, funded by the MMHA with a grant from the
Housing Corporation, includes a mix of two, three and four bedroom homes
and creates an inspirational landmark for the site.
The primary health centre facing onto the water park is now completed and open to the local
community.
Urban Splash will soon be starting work on refurbishing the listed Ancoats Hospital Dispensary and
building the adjacent 'Shingles' development around a private shared garden. There is some commercial
space as well as apartments in these two incredible buildings.
The Alsop-designed 'Chips' building is also under construction by Urban Splash, with all units already
sold off-plan. These schemes will be run on a site-wide Combined Heat and Power system, which will
reduce energy consumption across the whole of New Islington.
Planning permission has also been granted for the 'Urban Barns' (homes on stilts).
Bryant Homes' Botanic scheme gives further impetus to delivering
this new, iconic, environmentally sound Millennium Community. Started in
October 2007, The Botanic is a collection of 202 contemporary apartments built around an innovative
private garden with grassed areas and mature trees.
Sustainable neighbourhoods:
Urban environmental sustainability can be defined as meeting the present needs of urban populations in
such a way as to avoid harming the opportunities for future
generations to meet their needs. This can only be achieved by organising and managing cities:
• To minimise damage to the environment
• To prevent depletion of natural resources
Urban economic sustainability allows the individuals and communities who live in cities to have access to
a job and a reliable income.
Urban social sustainability provides a reasonable quality of life, and
opportunities to maximise personal potential through education and health provision, and through
participation in local democracies.
Daily Express Friday March 3 2006
ADVERTISEMENT FEATURE
Luxury Penthouses with a Difference in Manchester

Sumptuous city penthouses are not usually associated with being kind to the environment - one normally
thinks of extravagant lofts and a penchant for excess. But at Macintosh Village in Manchester, leading
homebuilder Taylor Woodrow has come up with something entirely different in the form of two stunning
penthouse apartments that lead the way in being both glamorous and green.

The penthouses are the centre-piece of the iconic Green Building - a unique experiment in modern
living, being hailed as one of the most environmentally friendly buildings in the country. Inside, the
apartments ooze class and are a triumph in contemporary design. The circular shapes of the exterior
are reflected in the sexy curves running through the interiors, breaking down the barriers of typically
linear modern apartments.

Set over two floors, they offer two bedrooms connected to a spacious, open-plan lounge/dining area by
a spiral staircase. The latest mod-cons, floor-to-ceiling windows and high-tech walk-in showers provide
a sleek backdrop to a high-octane lifestyle.

And with the utmost in luxury comes the knowledge that you'll be doing your bit for the environment. A
roof-mounted wind turbine - an unmissable addition to the Manchester skyline - and solar thermal
panels provide energy for electricity and hot water, cutting the cost of bills and saving the planet's
vital resources.

Living in Macintosh Village will provide the simplest access to the buzzing social scene and business
centre of Manchester - a place for those who play as hard as they work. Oxford Road railway station is
literally a stone's throw from your doorstep so reaching further afield is amazingly easy.

The standard penthouse is priced at £550,000, while the show penthouse is available at £570,000
including a top-of-the-range furniture package. Each home includes two car parking spaces. There is
also only one three-storey townhouse remaining at Macintosh Village, priced at £355,000.

For more information telephone 0161 237 1909 or log onto www.macintoshvillage.com

Prices correct at time of going to press. Images show interiors & exteriors at The Green Building.
Trafford Park Development Corporation (Property Led)
Date of Operation: 1987-1998
Type of Policy: Property-led Urban Development Corporation (UDC)
Public Sector Investment: £275 million
Private Sector Investment: £1,800 million
Details:
By the mid-1980's Trafford Park was a declining industrial area with no resident population. This policy
aimed to attract industries to the area so that the number of jobs would be increased in Trafford Park.
It was believed that as local people from Salford gained jobs, they would be able to spend their income
on improving the quality of their housing and so would not leave the area. New residents may also be
attracted to Salford due to the prospects of jobs, cheap house prices and a more pleasant environment
than previously. The emphasis of this scheme was purely economic. It did not take account of social or
environmental factors directly.
The Trafford Park Development Corporation (TPDC) claimed to have transformed the fortunes of the
area by attracting 37,000 new jobs and an additional 990 firms by 1998, bringing the total employment
in the area to 50,000. It is estimated that 60% of the jobs created were filled by local people. Over
£1 billion of private sector investment was attracted, including investments from US, Taiwanese and
Japanese firms. In addition, large employers such as Kelloggs and Rank Hovis remained in Trafford Park,
whereas without the UDC and improvements in infrastructure they would probably have left the area.
£4.5 million was allocated to a job training programme focused on deprived neighbourhoods in Salford,
Trafford and Manchester. The programme included the creation of "Job Shops" to advise local people on
how to apply for jobs in Trafford Park and a free jobs newspaper distributed to 100,000 households. It
reclaimed 2 km² of derelict land and planted 900,000 trees.
The limitation of the UDC policy, however, was that the people who did improve their incomes tended to
move out of the Salford area to suburbs such as Heaton Park, using their income to commute to work.
This meant that in Salford itself, the population still had an above average unemployment rate and
incomes were still low.
Salford Quays and Lowry Centre (Public Private Partnerships)
Date of Operation: 1984 onwards
Type of Policy: Salford Quays has not been developed using a specific inner city initiative. A
partnership between Salford City Council and a private development company raised the necessary
funding. Government grants were provided through existing programmes such as the Derelict Land Grant
scheme. European funds were also provided.
Public Sector Investment: £40 million
Private Sector Investment: £250 million including lottery funding for the Lowry Centre.
Aims:
Reclaim and redevelop the industrial waste land surrounding the Manchester Ship Canal docks; improve
the environment; provide jobs to the area and encourage new investment.
Details:
In its heyday Salford docks, at the head of the Manchester Ship Canal, was a major sea port. As the
importance of sea transport waned most quays became run down and abandoned. Salford City Council
reclaimed the derelict Manchester Ship Canal docks. It then sold sites to private developers for
building. Much of the land has been used for office development, with many private businesses locating
at the docks. About 500 waterside dwellings have also been built, some by housing associations for rent,
but most have been expensive luxury apartments. 4,000 new jobs have been created and job training has
been provided for local people to allow them to take advantage of the opportunities.
Today Salford Quays is to Manchester what Docklands is to London - a rejuvenated area of glass and
steel, offering the best in modern architecture combined with open spaces and attractive waterways.
The Lowry Centre is a more recent project. It forms a waterfront complex comprising a theatre,
exhibition space, the Imperial War Museum and "The National Industrial Centre for Virtual Reality".
Most housing was too expensive for locals to afford and has led to major gentrification. Deprived areas
nearby in Salford and Ordsall were not improved by the development. Much of the development of the
docks relies on tourism (Lowry Centre, Imperial War Museum, Outlet Stores), the money from which can
vary over time. The new retail outlet stores may in fact take shoppers away from Manchester city
centre itself. The Lowry Centre offers few employment opportunities for the locals, and along with the
gentrification of the area many locals have felt socially and physically excluded.
Regeneration of Hulme (Public Private Partnership)
Date of Operation: 1992-2002 (approx)
Type of Policy: City Challenge. Public-private partnership.
Public Sector Investment: £38 million (1992-1998) from City Challenge and more than £15 million from
other sources.
Private Sector Investment: Total not yet known but will considerably exceed £60 million
Details:
Until the 1960's Hulme was a traditional inner city neighbourhood of densely packed terraced houses. In
the 1960's the terraced housing was cleared and replaced by a council estate of deck access flats and
tower blocks housing a much smaller population of 12,000.
Within a few years the new blocks were suffering from a variety of defects including dampness, pest
infestation and inadequate heating. The blocks were ugly and many residents felt isolated and alienated.
Although a large area of public open space had been provided in the estate it lacked a clear use and
suffered from poor maintenance.
By the early 1980's the physical problems of housing combined with the social problems faced by the
residents had stigmatised the estate. According to some indicators the estate was one of the worst in
Europe. In 1991 unemployment was 40%, 60% of the residents depended on state benefits and 80% of
households lacked a car.
In 1991 Manchester City Council won City Challenge funding to redevelop and regenerate the area. The
project was to be run by Hulme Regeneration Ltd whose board included council representatives, local
residents and representatives of AMEC (a construction company with expertise in inner city
redevelopment).
The blocks of flats had all been demolished by 1998. The new development has the following
characteristics:
Low rise housing following a more traditional design.
An emphasis on streets and squares to encourage a stronger sense of community.
A mixture of house designs and types of tenure (rented, council or bought).
Creation of a shopping high street for Hulme to help create a sense of community. The high
street's focus will be a new Asda store.
Provision of new community and job training facilities.
Creation of a new park and green spaces.
Workshops and offices will be mixed with housing to encourage firms and jobs into the area.
Successes already include:
900 new homes
600 homes refurbished
540 jobs created (building contractors are encouraged to recruit local people and local people
receive training and advice on setting up small businesses)
Failures:
Local councils have to put a lot of effort into preparing bids for government money.
Some areas which would have received money in the past end up not getting any if their bid is
unsuccessful.
Resources are thinly spread over large areas.
Decentralisation of Retailing and Other Services
1. Characteristics of Out-of-Town Retailing
In the past there was a recognisable hierarchy of retail and service centres in UK towns and cities, with
small neighbourhood shops providing mainly convenience goods to the local population, while the city
centres remained places that provided access to comparison goods as well as providing a range of
specialist services to the whole metropolitan area and beyond. From the 1970's onwards this pattern
underwent a seismic shift, with retailing changing radically by decentralising, a process
encouraged under Thatcher's Conservative government in the 1980's.
Modern retailing is changing rapidly as shown by the rapid growth in superstores and retail parks, that
were first pioneered in the US and France. Superstores are large out-of-town outlets, close to
residential areas, with more than 2500 sq metres of shopping space, ample parking and good road access.
DIY, electrical and furniture superstores may cluster to form a retail park. If the retail park is 'open
air' as opposed to 'under one roof' and the majority of the buildings are low-rise warehouses, it can be
termed a Box Mall. Retail parks are now common to most UK cities and towns, the largest of which are
known as Regional Shopping Centres. These regional centres draw in customers from well beyond the
region in which they are located. There are now 10 regional centres in the UK and they include well known
names such as:
Metro Centre – Gateshead.
Cribbs Causeway – Patchway near Bristol
Trafford Centre – Manchester
Lakeside – Thurrock
Meadow Hall – Sheffield
Bluewater – Kent
(the largest in the EU, with 342 shops occupying 168,900 m²)
Other services and facilities often cluster around these out of town sites, either within the boundary
of the retail park or close to it. This is because the postcode of such a site has a 'green and pleasant
image' compared to the run-down inner city sites from which the businesses have decentralised. Services
that may locate here include office space providing accommodation for the service industry, e.g. call
centres, head offices, regional headquarters, and distribution and logistics centres. Other retailers
around the perimeter of the main shopping centre often set up box malls: cheaply built, box-shaped,
low-rise warehouse-type outlets selling specialist goods such as DIY materials, consumer
electricals or ‗white goods‘ as well as computer equipment and furniture. They are not all under one roof
and tend to be less attractive and exposed to the elements. They tend not to specialise in comparison
goods like clothes, which are left to the shopping centre or the inner city. There is now a trend in the UK
for these Box Mall type developments to locate on the edge of the suburbs, where they also often adopt
a leisure function such as restaurants and cinemas. A good example would be the Middlebrook Retail
Park, Bolton (off Junction 6 on the M61).
Characteristic features include:
Extensive layout of a very modern, designed building
Impressive architecture of the buildings – common features of design
Extensive areas of free car parking – both on ground and in multi-storey facilities
Well-planned access routes – roundabout and dual carriageway – designed for the motor vehicle
Impressive landscaping – trees, lake, picnic tables
Major national retailers present
The retailing revolution has centred on the development of out-of-town shopping precincts on
Greenfield sites at the edge of large urban areas
Out of town sites also provide collection points for recycling facilities
Some out of town sites have monorail, metrolink or even park and ride schemes that link them to
town centres and the rest of the suburban metropolitan area
Some even have their own police stations sited nearby.
The Trafford Centre
The Trafford Centre is an example of one such enormous regional centre! It
comprises 1.9 million ft² of space, 3 miles of shop fronts that includes 230
shops, 60 restaurants, cafes and bars (though it can take up to 40 minutes
of queuing to get served), a 1,600-seat food court, a 20-screen cinema (one
of the busiest in the country), a Laser Quest, 18 ten pin bowling lanes, and
Paradise Island Adventure Golf. The infrastructure is equally impressive with
19 escalators, 43 hydraulic lifts and parking for 10,000 cars. Beyond the
Centre is Trafford Quays Leisure Village with its hotels, golf driving range,
fitness centre, a real snow ski facility and Airkix sky diving simulator. Other
stores such as a huge ASDA superstore as well as the B and Q Warehouse
have chosen to decentralise to this location.
It is located approximately 9km to the west of Manchester city centre, close to junctions 9 and 10 on
the M60 (formerly M63).
The Trafford Centre catchment area is larger and
more populous than any other regional shopping
centre in the UK and comprises 5.3 million people
within a 45-minute drive-time band, with a total
potential retail expenditure of £13 billion.
The Trafford Centre idea was conceived in 1984 by
the Manchester Ship Canal Company. Planning
permission was sought in 1986 and approval was
finally upheld by the House of Lords in 1995.
Trafford Council initially liked the idea because they
thought the development would provide jobs (35,000
people unemployed within a 5 mile radius), and
stimulate investment near to an area with economic
and social problems. Also very little public investment
was required because the Manchester Ship Canal Company was willing to invest £150 million without public
subsidy. However it took over 10 years to be granted permission due to the potential for severe
congestion on the M63 (now improved into the M60 ring road). It took 27 months to build and opened on
the 27th September 1998.
The Trafford Centre currently attracts 36 million visits annually with an average weekly footfall of
465,000. The Trafford Centre is a tourist destination, attracting visitors from all over the UK and
beyond. Over 4,200 coaches visit annually bringing more than 150,000 people to the Centre.
2. Causes of Out-of Town Retailing
Factors causing the development of out of town retailing parks include:
Increased personal mobility, caused mainly by the private car; this can be linked to rises in personal
incomes and the emancipation of women
Expensive car parking tariffs in central areas of cities, contrasting with the prospect of free
parking at the retail parks
Increase in the number of motorways and motorway junctions allowing greater market areas for
retailers, and ease of access for customers
Greater congestion in central areas of cities reducing journey speeds and making shopping an
unpleasant task
The increasing trend for shopping to be perceived as a family social activity, including the
development of other parallel entertainment outlets on the same site
Land is relatively cheap to buy at the urban fringe and the extra space available allows the
provision for large car parks and later expansion of the development. Cheaper land prices lead to
greater economies of scale, and therefore lower prices.
Accessibility is usually much better at these localities, due to an outer ring road or motorway
interchange.
Other transport interchange facilities may also exist, such as bus and railway stations, or even
supertrams
Freezers allow shopping to be done in bulk and less frequently: the 'one stop shopping' idea.
3. Effects of Out-of-Town Retailing
The Trafford Centre for instance has brought increased economic wealth and increased
congestion to the area.
The view of some politicians is that retail development in out of town locations has gone far
enough. In some cases planning permission for further expansion has been refused and it now
seems unlikely that any new regional out of town retail parks will be constructed. Many opposed
to retail parks argue that they cause continued urban sprawl into the dwindling green belt.
They are now seen as socially divisive. This means that affluent people with cars have access to
these facilities but the under 17‘s, the elderly and people on lower incomes have restricted
access as bus services are sometimes infrequent or involve many changes.
New life has been injected into existing CBDs to avoid the problems of economic decline in
these areas. There have been improvements to pedestrian areas and shopping malls, CCTV and
other safety systems, more Sunday/late night opening and special events. The new developments
in the city centres of Newcastle (Eldon Square), Manchester (former Arndale Centre) and
Birmingham (Bull Ring) are most noteworthy.
Some supermarket chains are turning their attention back into their existing and new CBD
outlets. For example, J. Sainsbury have developed new "local" stores which do not sell the full
range of goods found in their larger outlets, but do stock items targeted at local needs.
Retail parks often create much needed jobs in run down areas, but critics are quick to point out
that it is not usually the disadvantaged labourer who ends up being employed there; instead
these jobs are often filled by young part-time casual workers from the nearby suburbs. The
surrounding Trafford towns of Stretford, Davyhulme, Sale and Altrincham and the adjacent City
of Salford provided a ready supply of manpower to fill the 7,000-plus job opportunities
generated by The Trafford Centre (although approximately 53% are part-time), as well as the
3,000 jobs created during the construction phase. However, the question has to be asked: does
the quality of these jobs, both in terms of pay and enjoyment, go far enough to fill the void left
behind by the mass loss of skilled and semi-skilled workers who used to be employed on the
Trafford Park site and those who lost their jobs in the neighbouring Salford area after
manufacturing decline?
Effects of out of town retail on the high street in the North West:
Many high streets and city centres in the North West region have been affected badly by the
development of the Trafford Centre. Retail outlets have closed and CBD areas have become run down, as
customers are attracted to the ease of access, the availability of free car parking and the whole
entertainment experience of the Trafford Centre.
Some city centres like Blackburn have begun to "fight back", attracting customers by creating under-cover shopping centres with trees and cafes, pedestrianising main shopping streets, reducing parking
fees for shoppers and encouraging late-night and Sunday opening. This attempt to attract shoppers back
to the city centre is being helped by recent government restrictions on any further regional shopping
centres.
Their effect on high street shops has often been very considerable and usually detrimental, for
example:
Decline of city centre shops causing shop closures and job losses.
Reduction in city visitors and pedestrian densities and footfalls
Increase in the number of charity shops in former premises of chain stores.
Greater emphasis on office developments and other services.
However, increasingly government policies favour central shopping areas and neighbourhood schemes
over out-of-town developments. Also the perceived threat of out-of-town retailing to the town centre
has resulted in a variety of attempts to maintain or revitalise retailing in central areas:
Improvements to pedestrian areas and CBD shopping malls.
Creation of traffic free zones.
CCTV systems to improve security.
More Sunday and late night opening to attract shoppers.
Special events are put on to make the CBD area a "destination place".
This revitalisation of retailing in central areas is of increasing importance and in some places this has
coincided with inner city redevelopment, e.g. in Manchester. Renovation may entail full-scale
development but, as a rule, redevelopment only affects small sections of the urban core.
4. Attitudes to Out-of-Town Retailing
People living near to these large superstores will have a variety of views on such developments.
Proximity to such places gives them a greater opportunity to shop without the need to travel
into the city centre, which they would be happy about.
They also provide greater employment opportunities for local people, especially students and
young people at weekends, so these groups would be supportive of this change in retailing.
However, the resultant increase in traffic in the area, with the consequent pollution and noise,
would not be looked on favourably by local residents. Local streets may become more congested
and any provision for all night shopping will mean continual movement in the area, especially from
delivery lorries, so local residents often oppose new developments or the expansion of existing ones.
Store managers of inner city shopping centres such as the Arndale Centre would be unhappy that
the competition from places like the Trafford Centre would drive away potential customers
attracted by free parking, late opening times and easier journeys.
However, in the long run
such managers may be happy as it would give them the negotiating power to confirm new funding
with town councils to expand and improve existing facilities e.g. the Arndale Centre, Manchester
City Centre.
Local and national governments once looked at retail parks favourably, as it was seen as a quick
fix solution to redevelopment and job creation. Politicians now see the tide has turned and the
decentralisation of retail and other services is viewed by some as a mistake due to the damage it
causes to existing town centres, and the damage to the environment through transport and need
for new roads as well as the retail units themselves which all encroach on finite greenbelt.
The Redevelopment of City Centres: Case Study Manchester
The Response of Manchester City Centre to Out of Town Retail and an Assessment of its
Impact:
Manchester is being marketed as a vibrant, exciting city region with a strong managed image. It has
become a 24 hour city since the Commonwealth Games in July 2002 and redevelopment has successfully
impacted on its fortunes.
The nightlife until the mid-1990s centred on the independent clubs and pre-club pubs in areas such as the Northern Quarter and Deansgate, but that has changed. The Central Manchester Development Corporation (a U.D.C., just like the London Docklands) successfully improved the quality of the built environment by repaving the streets, improving the street lighting and providing grants for cleaning buildings, to a standard not seen in the 1980s. Piccadilly Gardens was completely redesigned in time for the Commonwealth Games.
Piccadilly Gardens
Events in the town centre, such as the August Mardi Gras, attract well over 100,000 visitors to the centre, while the carnivals at Chinese New Year are also very popular. Street performers, including individual musicians, music groups and pavement artists, can frequently be found in the Market St. area, especially in the pre-Christmas period. Large video screens put in place for the Commonwealth Games have been retained, e.g. in Piccadilly Gardens and Exchange Square (opposite Selfridges), so that people can congregate to watch sporting and other events. The 'Big Ferris Wheel', once only a Christmas attraction, has now become a permanent feature. These events have allowed the town centre to be successfully branded as a tourist destination; it easily outcompetes the atmosphere generated by events in the Trafford Centre, such as frequent fashion shows (e.g. the TV show How to Look Good Naked, filmed in the 'Great Hall'), book signings and live music.
In May 2003 the city held its Europa Festival, '11 days of fun', to
coincide with the Champions League Cup Final at Old Trafford.
The Sunday before the match saw the first running of the Great
Manchester Run, a 10km. road race/fun run starting and
finishing in the city centre. It attracted over 10,000 runners
and thousands of spectators as well as television coverage and
has continued as an annual event.
Christmas Markets
Seasonal attractions such as the German Christmas Markets
have become bigger and better over the last 5 years
and have forged a reputation for Manchester City Centre as a
leading seasonal tourist attraction.
The growth of leisure outlets throughout the city has been remarkable. Café bars, restaurants and pubs have been inserted into the ground floor of almost any building with space. They are characterised by large windows and, where there is room, tables and chairs set out on the street. City life in the European manner has come to Manchester. Three new bars appeared on Peter St., which already had a Prenez, Joop and Hullabaloo, probably due to the development of The Great Northern Shopping Experience in the old railway warehouse opposite. Also on Peter St., the Albert Hall, once a Methodist Hall, has been refurbished as a new Brannigan's bar, a cross between a bar and a nightclub, which complements their Royales discotheque further up the street. Other relatively new leisure facilities on
redeveloped sites include The Printworks cinema and bars complex (includes
Tiger Tiger), the M.E.N. Arena and Bridgewater Hall concert venues and the
G.MEX exhibition centre.
Manchester Arndale
Printworks Complex
Shopping centres have also capitalised on
the redevelopment of the city over the last
decade and most recently the Arndale
shopping centre has been refitted to such a
standard that it now rivals the Trafford
Centre. New shopping centres with leisure
functions have also been created such as
the Spinningfields Development just off
Deansgate and the Triangle Shopping
centre in Exchange Square.
Accessibility to the city centre has been improved since the mid-1990s with the development of the Metrolink tram system and shoppers' buses, both of which link Piccadilly and Victoria stations as well as the main bus stations, e.g. Chorlton St. and Piccadilly Gardens. However, parking in Manchester remains a problem, as many car parks are expensive and there are insufficient spaces for Manchester car users. The proposed congestion charge would also compound problems for motorists entering the city centre, and it is no surprise that central businesses oppose it.
Metrolink: Improved Access
Late-night opening has also been a recent feature of Manchester City Centre, in order to compete with the Trafford Centre, which stays open until 10pm. From 1 June 2005 the city centre shops have opened until 8pm every day. This strategy was announced on the same day as a new John Lewis store opened at the Trafford Centre retail park, producing an instant response by the city centre!
Contemporary Sustainability Issues in Urban Areas
(i) Waste Management and Recycling and its alternatives
Waste management is a pressing issue for city authorities to solve, especially as mega cities (and millionaire cities) have grown to such a size that the products their populations consume create huge amounts of waste that must be disposed of; the question is where, or more importantly, how?
Sustainability is becoming an increasingly important ideal on the political agenda as waste management
has an environmental dimension. For cities to become more sustainable (i.e. meeting the needs of the
present generation without compromising the needs of future generations) a framework that supports
the philosophy of efficient waste management has begun to be generally accepted and is being realised
through the implementation of the Waste Management Hierarchy (see below) at a local, national and
international level.
The Waste Management Hierarchy
Waste Management: the collection, transport, processing and recycling of waste materials, usually produced as a result of human activity, with the desired outcome of reducing the impact on health, the environment and the aesthetics of an urban area. It can also involve the recovery of useful products from the waste.
Diversion Rate: the amount of material in kg/household/yr that is recycled and/or composted instead of going to landfill.
How it Works: The waste hierarchy refers to the "3 Rs", namely reduce, reuse and recycle, which classify waste management strategies according to their desirability. The 3 Rs are meant to be a hierarchy, in order of importance. The aim is to extract the maximum amount of practical benefit from
products and generate the minimum amount of waste once they are used.
Whose responsibility is the waste?
Waste management practices differ for developed and developing nations, for urban and rural areas,
and for residential and industrial producers. Management for non-hazardous residential and
institutional waste in metropolitan areas is usually the responsibility of local government authorities,
while management for non-hazardous commercial and industrial waste is usually the responsibility of the
generator.
In the UK, after collection, the residual waste is disposed of (either through landfill or incineration) and
then recycled, remanufactured and/or energy recovered (if energy is recovered from waste it is classed
as recovery rather than disposal - see incineration below).
The Waste Disposal Authority (WDA) is a local authority charged with providing disposal sites to which
it directs the waste collection authorities for the disposal of controlled waste, and with providing civic
amenity facilities. In England these are the County Councils and the Unitary Authorities.
Waste in the UK
It is estimated that nearly 36 million tonnes of municipal waste was generated in the UK in 2004/05. A
total of 30 million tonnes of this waste was collected from households. That's about 500 kg or half a
tonne of household waste per person! Although household rubbish represents a relatively small
percentage (about 9%) of the total amount of waste produced it is a highly significant proportion
because it contains large quantities of organic waste which can cause pollution problems, as well as
materials such as glass and plastics which do not break down easily. Despite the Waste Hierarchy, the
majority of the waste in the UK ends up in municipal landfill (67%).
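The "half a tonne per person" claim above can be checked with a quick calculation; note that the 60 million UK population used here is an assumed round figure for the mid-2000s, not a value given in the text:

```python
# Rough check of the household waste figure quoted above.
# NOTE: the 60 million UK population is an assumption (round figure, mid-2000s).
household_waste_tonnes = 30_000_000   # waste collected from households, 2004/05
uk_population = 60_000_000            # assumed population

kg_per_person = household_waste_tonnes * 1000 / uk_population
print(f"Household waste per person: {kg_per_person:.0f} kg")  # -> 500 kg
```

Dividing 30 million tonnes by the population does indeed give roughly 500kg, or half a tonne, per person.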
Both residual waste collection and the management of recyclable waste collection vary significantly from one Waste Collection Authority (WCA) to the next.
Waste Targets
Local Authorities have been set targets by the EU and the UK government to decrease waste going to
municipal landfill and to increase recycling to between 10% and 33% over the coming years.
In 2005/06:
Recycling and composting of household waste was nearly 27% for England as a whole. This has nearly
quadrupled since 1996/97.
Through these and additional measures the hope is to increase the recycling of household waste to the
targets set by the Government and the Welsh Assembly:
To reduce household waste not re-used, recycled or composted by 45% by 2020 (from 22.3 million
tonnes in 2000 to 12.2 million tonnes in 2020).
Higher targets for recycling and composting of household waste – at least 40% by 2010, 45% by
2015 and 50% by 2020.
These are significantly higher than the old targets (set in Waste Strategy 2000) of 30% by 2010 and 33% by 2015. They will bring England on a par with its European neighbours.
How will these targets be achieved?
Fines: Local Authorities (LAs) that do not meet these targets will be fined for waste they bury over their quota.
Pay as you throw: There has also been the suggestion that individual households should be billed for the waste they produce by way of a 'bin tax' (nicknamed 'pay as you throw' by the media). This is a way for Local Authorities to pass on the increasing costs of waste disposal to the consumer. In order to encourage green behaviour, families who continued to fill their bins would pay more and those who produce less waste would pay less. In Bristol, the Liberal Democrat-run City Council applied (March 2010) to the Department for Environment, Food and Rural Affairs (DEFRA) for permission to trial a voluntary scheme. Under the local authority's plans, residents of 2,362 homes in the city would be invited to take part in a six-month pilot scheme whereby each bin would be weighed by the Local Authority bin operators and logged by a computer reading a microchip on the bin. Each household would then receive a fairer bill for the refuse it produces; there is even the suggestion that households reducing their waste could claim money back under this proposal. If the pilot is successful, Bristol would be the first city in the country to introduce such a scheme, despite the media fury and public opposition, both local and national, that surround this and similar schemes.
Waste Management Statistics for Selected Regions
Waste management methods vary widely between regions and areas within regions for numerous reasons,
including: the type of waste materials produced in that area; nearby land-use; availability of suitable
landfill sites; as well as the success rates of waste diversion.
Map of Overall Diversion in the Northwest
Manchester (2007/8)
No of households: 213,942
No of Materials Collected Kerbside: 6
No of Bring / Civic Amenity Sites: 407
Total Diversion Rate: 233kg/hh/yr
Household Waste Total: 918kg/hh/yr
Household Recycling Rate: 16.72%

Blackburn with Darwen (2007/8)
No of households: 59,501
No of Materials Collected Kerbside: 7
No of Bring / Civic Amenity Sites: 32
Total Diversion Rate: 361kg/hh/yr
Household Waste Total: 1160kg/hh/yr
Household Recycling Rate: 26.63%
Map of Overall Diversion London
Corporation of London (2007/8)
No of households: 5,839
No of Materials Collected Kerbside: 7
No of Bring / Civic Amenity Sites: 20
Total Diversion Rate: 314kg/hh/yr
Household Waste Total: 923kg/hh/yr
Household Recycling Rate: 33.02%
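The diversion rates above are absolute figures (kg per household per year); dividing by each area's total household waste expresses them as a comparable share. Note that the results differ slightly from the published household recycling rates, which are calculated on a different basis:

```python
# Diversion as a share of total household waste, from the 2007/8 figures above.
regions = {
    "Manchester": (233, 918),              # (diverted kg/hh/yr, total kg/hh/yr)
    "Blackburn with Darwen": (361, 1160),
    "Corporation of London": (314, 923),
}

for name, (diverted, total) in regions.items():
    print(f"{name}: {diverted / total:.1%} of household waste diverted")
```

On this measure Blackburn with Darwen and the Corporation of London divert roughly a third of household waste, while Manchester diverts about a quarter.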
Describing the Spatial Pattern for London: there appears to be a clear pattern when mapping the diversion rate for London. The central areas have lower diversion rates, i.e. more of the waste generated in these boroughs ends up in municipal landfill. With distance from the city centre the diversion rates increase, as a greater proportion of people recycle their waste in these areas, e.g. Harrow and Hillingdon to the NW and Bromley and Bexley to the SE. Richmond, Kensington and Sutton to the south also have higher rates of waste diversion.
Reasons: It is possible that people living in the more central London boroughs are, on average, less affluent and therefore, one could argue, less aware of the recycling message and less likely to be informed on green issues, so they do not take action to reduce their waste. It is also probable that the richer, more suburban boroughs such as Richmond benefit from higher levels of taxation (through council tax), and that these councils therefore have more money to spend on recycling facilities and services, as well as on promoting them; provision of services is probably an important controlling factor on green behaviour. It may also be argued that in some parts of central London, where the cityscape is built up and people live in smaller housing units and flats, it is harder for councils to provide recycling facilities as there is insufficient space, hence people recycle less. It is also probable that some younger people living nearer the city centres are less likely to take ownership of the waste they produce and tend not to recycle; such groups use more consumable products that go to landfill in the first place, e.g. takeaway trays, fast food packaging and convenience meals, all linked to younger lifestyles. A further problem manifests itself where people live in a house-share, e.g. students in a shared house, where four or five separate lifestyles go on in one house but nobody takes responsibility for the recycling.
Waste Management Methods
1. Waste Disposal
a. Landfill
Landfill has been the UK's favoured way of disposing of waste. It involves burying waste in the ground in former quarry sites or mines, made possible by the UK's geology and industrial history. Landfill was once seen as a cheap option, but the number of new potential sites is dwindling. Many of the UK's now-disused landfill sites were constructed by the Victorians without proper design or management and hence caused, and continue to cause, many problems. These include dangerous build-ups of methane gas as well as subsidence, wind-blown litter, vermin, flies, odours and groundwater contamination from the chemical plume that leaks out of the landfill site through time, termed leachate. Such problems are hard to rectify and involve expensive retrofitting. If planned and managed properly, however, landfill is a cheap and convenient option, although its cost has more than doubled over recent years as it is now heavily taxed: pre-2002 costs were about £15/tonne, whereas costs are now in the order of £35/tonne. Most modern landfill sites in the UK are fitted with an impermeable geotextile membrane, or lined with impermeable clay, to prevent leaks of leachate, with wells installed to measure and burn off any methane gas. Modern landfill sites also monitor contamination of the groundwater store and river water through chemical testing in boreholes and rivers. Sometimes it may be necessary to fit barriers to a site that has begun to leak after it has been reinstated (covered with soil, closed off and landscaped). Some of the more advanced landfill sites have bio-reactors fitted that use enzymes to convert the waste material into useful products such as methane, which can be burned to make electricity or to heat local industrial buildings.
b. Incineration
This involves the combustion of waste material and can be done with or without energy recovery.
Incinerators convert waste materials into heat, gas, steam and ash. It is the most successful way to
dispose of harmful biological wastes e.g. hospital waste.
Around 8% of municipal waste is incinerated in energy from waste (EfW) plants in the UK, compared to around 50% in Sweden and Denmark.
The technology to burn waste has developed significantly over the past 50 years and incinerators are now much cleaner than they used to be. However, incinerators remain controversial as they cause atmospheric pollution through the emission of gaseous pollutants, which has caused high public concern since they may represent a public health risk. Some plants include a furnace connected to a boiler that provides heat for local offices and/or steam to produce electricity; however, even though some of this energy is recovered, it may still not be the best way to reprocess our waste from a resource point of view, and we may in fact be wasting resources.
2. Recycling Methods
a. Physical Reprocessing
This is a popular recycling option in most developed countries and refers to the widespread collection and reuse of common materials. These materials may be separated at the point of collection, by use of different coloured bins/containers, or at the point of processing, e.g. at a MRF (see below).
At a Materials Recycling Facility large quantities of mixed dry recyclate are separated into their constituent components and baled prior to being sent to reprocessors.
At the MRF the materials typically travel along a conveyor belt and the specific fractions are gradually removed. Metals may be extracted using magnets, paper taken off by weight and other screening devices used.
Due to the problems of plastic identification, at many MRFs these are still hand separated, however
advances in technology are allowing some MRFs to use electronic means to identify and separate
different plastics from the waste stream.
b. Biological Reprocessing
Anaerobic digestion
This is the process in which biodegradable waste is decomposed in the absence of oxygen to produce
methane which can be collected and burnt as a fuel to produce electricity. Currently this method is
under investigation in order to provide more alternatives for this fraction of the waste stream.
Mechanical biological treatment
This is a generic term for several processes which include MRF and composting. It simply involves mechanical processes of drying and bulk reduction for household waste. Drying eases separation of recyclables, and the material remaining is highly combustible. Two main systems operate to treat this material:
1. Mechanical biological treatment. Residual mixed waste is mechanically sorted into recyclable materials, refuse
derived fuel (RDF) and an organic fraction. The organic fraction is treated and used as a soil conditioner; the RDF
goes for further treatment and is then used for EfW, e.g. gasification or pyrolysis. Some material is rejected and
landfilled.
2. Biological mechanical treatment. The residual, mixed, unsorted waste is homogenised and treated by part
composting and drying. Sorting and treatment follows, using mechanical processes so that recyclate, RDF and soil
conditioner streams, as well as the rejected fraction of residual waste, are produced.
c. Energy Recovery
The energy content of waste products can be harnessed directly, by using them as a combustion fuel, or indirectly, by reprocessing them into another type of fuel. Various methods exist:
Refuse derived fuel (RDF)
Refuse derived fuel is created from fractions of the waste stream with a high energy content, typically paper, plastics, textiles and wood. Mixed waste is separated using screens and mechanical processes to remove glass and metals for recycling, the biodegradable content and the RDF. RDF can be used for EfW, thermal treatment or in an existing industrial process. RDF can often be used in conjunction with other fuels in a process known as co-combustion, such as in cement kilns, where worn tyres may also be used.
Other energy from waste processes
There have recently been developments in new technologies for the disposal of waste, many of which are becoming
increasingly viable options for the disposal of household waste:
Gasification
Whilst gasification has traditionally used fossil fuels like coal, it has the capability to accept mixed fuels, including waste. The fuel (biomass) is heated in anaerobic conditions, producing a low-energy gas containing hydrogen, carbon monoxide and methane, which can then be used to generate electricity. With new technology the resulting emissions make it a favourable alternative to incineration.
Pyrolysis
This is an emerging technology similar to gasification, but in this process there is a total absence of oxygen. Pyrolysis is used on carbon materials and refuse derived fuel (RDF), producing gas, oil and char. Char is produced as a by-product of this process and can be recovered for use as a fuel (e.g. in gasification) or alternatively disposed of. Gas and oil can be processed, then combusted and used to generate electricity. Strict legislation is
imposed on these processes due to their hazardous emissions such as heavy metals. These can be reduced by the
fitting of flue-gas cleaning equipment.
3. Waste reduction methods
An important method of waste management is the prevention of waste material being created, also
known as waste reduction. Methods of avoidance include reuse of second-hand products, repairing
broken items instead of buying new, designing products to be refillable or reusable (such as cotton instead of plastic shopping bags, and the emergence of 'bags for life'), encouraging consumers to avoid using disposable products (such as disposable cutlery), removing any food/liquid remains from cans and packaging,
and designing products that use less material to achieve the same purpose (for example, lightweighting
of beverage cans).
An Example of a City's Website Campaign Designed to Encourage Recycling: Recycle for Greater Manchester.com (set up by Greater Manchester Waste Disposal Authority, GMWDA).
Our Waste Solution:
Across Greater Manchester your waste will be treated and processed in a revolutionary way.
1.3 million tonnes of waste will be handled each year and we will be building on the past successes of
Greater Manchester which has seen a rise from 7% recycling in 2002/03 to over 30% today (2010).
This new way of treating your waste will ensure an impressive recycling rate of at least 50% by 2015,
and will also divert more than 75% of waste away from landfill. It will provide a world class solution for
Greater Manchester‘s household waste, serving approximately 973,000 households (AGMA, 2009) and a
resident population of over 2.27 million.
See Website and Leaflet for further details (I have simplified the layout of the information
leaflet below).
Viridor Laing (Greater Manchester) Ltd
In April 2009 Greater Manchester Waste Disposal Authority (GMWDA) and
Viridor Laing (Greater Manchester) Ltd (VLGM) signed a £3.8bn contract –
Europe’s largest ever waste management Private Finance Initiative (PFI) – that will
enable the authority to achieve its vision of providing a world-class solution for our
waste.
Working Together
Viridor Laing (Greater Manchester) Ltd is a consortium made up of Viridor Waste Management Ltd,
a leading UK waste management and recycling company which currently handles nine million tonnes
of waste per year, and John Laing plc, a leading investor, developer and operator of privately
financed public sector infrastructure projects.
GMWDA is the largest of the English waste disposal authorities, providing services for more than
970,000 households across nine Districts: Bolton, Bury, Manchester City, Oldham, Rochdale,
Salford, Stockport, Tameside and Trafford.
Our targets
• A recycling and composting rate of 33% for 2010
• A recycling and composting rate of 50% for 2020
• Meeting the following – and intermediate – LATS targets: 2009/10, 2012/13 and 2019/20
• Stabilising waste growth to 1% per annum by 2010
• Reducing waste growth to 0% per annum by 2020
The signing of the contract heralds the beginning of an enormous and exciting construction
programme to create a network of state-of-the-art waste and recycling facilities across the Greater
Manchester area, creating over 5,000 jobs in the process.
It will take about four years before all the new infrastructure is up and running. During this time,
the consortium will be responsible for various activities including: design and planning, building,
financing, operations/maintenance and communications
Proposed technologies and facilities
Household Waste Recycling Centres (HWRCs)
There will be a major overhaul of the current network of HWRCs including proposed new sites and
upgraded facilities, many of which will be ‘split level’. There will also be four public education centres
at Raikes Lane, Bolton; Bredbury Parkway, Stockport; Pilsworth, Bury; and Longley Lane,
Manchester.
Materials Recovery Facility (MRF)
A brand new ‘clean’ MRF, located at Longley Lane in Manchester, will sort
cans, glass and plastic bottles from the kerbside collections across the
Greater Manchester area.
Mechanical Biological Treatment (MBT) and Anaerobic Digestion (AD)
New MBT/AD plants will replace the existing treatment facilities at Longley Lane and Reliance Street
(Manchester), Cobden Street (Salford), Bredbury Parkway, (Stockport) and Arkwright Street,
(Oldham), (MBT with no AD).
Refuse Derived Fuel (RDF)
Approximately 275,000 tonnes of High Calorific Value Refuse Derived Fuel (HCV-RDF) from the
MBT/AD process will travel by rail to a Combined Heat and Power (CHP) facility at Ineos Chlor in
Runcorn, Cheshire.
Energy from Waste (EfW)
The EfW facility at Bolton will continue to operate.
In-Vessel Composting (IVC)
Four new enclosed IVC facilities will treat kitchen and garden waste at: Bredbury Parkway,
Stockport; Waithlands, Rochdale; Salford Road, Bolton; and Trafford.
Green Waste Shredding (GWS)
The facilities at Every Street, Bury and at Longley Lane, Manchester will be upgraded and improved.
Transfer Loading Stations (TLS)
These will be upgraded – or new ones created - at: Raikes Lane, Bolton; Every Street, Bury;
Arkwright Street, Oldham; Waithlands, Rochdale; Cobden Street, Salford; Bayley Street, Tameside;
and Bredbury Parkway, Stockport.
(ii) Transport and its Management: The Development of Integrated, Efficient and Sustainable Systems
Case Study: Greater Manchester
Towards Sustainable Travel
Sustainable transport (or green transport) is a concept, an
ideology and, in some countries, a governmental policy that
consists of strengthening or replacing the current transport
systems of an urban/suburban area with more fuel-efficient,
space-saving and healthy lifestyle-promoting alternatives.
Greater Manchester has a relatively high proportion of journeys to work by car, 65% compared to 61% nationally, with 75,000 cars daily on the 10 NW motorways (2005-06) and a total of 95,000 vehicles per day overall. Traffic growth between 1997 and 2007 matched the UK average (14%), but transport CO2 emissions are relatively low at 1.8 tonnes per person, implying relatively short trip lengths. For AM peak commuting to the regional centre (Manchester and Salford city centres), 61% of trips are made by non-car modes (Greater Manchester Transport Statistics, GMTU, 2007), and there is evidence to suggest that car journeys to the city centre are falling (even without the congestion charge).
Achieving sustainable travel behaviour is a huge challenge, yet the planned transport infrastructure and
TDM investments and associated urban planning initiatives illustrate the potential for change at the
metropolitan scale.
Inner Manchester and central Salford are seen as occupying a critical role as the regional centre, and a significant proportion of the 'transport strategy' (see later) is directed at serving (and managing) radial movements to and from this commercial core. Urban areas in the UK have double the traffic flows of rural areas.
Sustainable transport systems make a positive contribution to the environmental, social and economic
sustainability of the communities they serve. Transport systems exist to provide social and economic connections,
and people quickly take up the opportunities offered by increased mobility. The advantages of increased mobility
need to be weighed against the environmental, social and economic costs of the transport system in question.
What is being done to develop sustainable transport systems in Manchester?
1. 'Placemaking': There is an Association of Greater Manchester Authorities (AGMA), a well-established network of ten local authorities that has operated since local government reorganisation in 1986 to enable strategic coordination across the Greater Manchester city region, with key functions including transport and forward planning.
The context for the AGMA has been strengthened in recent years by the emerging national policy
emphasis on the city region spatial dimension to encourage economic growth. Within the Greater
Manchester conurbation, AGMA has been a central actor in an ongoing process of economic restructuring
and integrating a series of separate former industrial towns such as Oldham, Bury, Bolton, Wigan,
Salford, Manchester, Stockport etc. to form an
internationally competitive city region.
2. The Transport Innovation Fund: In support of
Greater Manchester‘s regeneration agenda, a
Transport Innovation Fund (TIF) bid to Central
Government was debated and developed between
2006 and 2008, including a congestion charging
scheme. The scheme proposed to charge residents
driving into the city in a similar way to the London
Congestion Scheme, with the revenue from the
charge being invested back into public transport.
Although the charging scheme was rejected in a 2008 referendum, in which residents of all ten Greater Manchester districts voted against it, many of the associated transport infrastructure/traffic management and travel behavioural change measures may eventually receive funding through other mechanisms.
3. Public Transport Investment: The congestion scheme was conceived as a major opportunity for
public transport investment in the conurbation as a whole, which in turn has been clearly linked to
broader economic, social and environmental objectives. The original TIF package represented £2.8bn of
investment, with 80% of the public transport component being operational in advance of congestion
charging. Planned public transport investment included additional sections of the Metrolink tram
network (which is still in the pipeline), capacity increases in local rail services, a restructured pattern
of bus services, additional park and ride spaces, plus improved interchange, ticketing and information
facilities. There has been a recent funding announcement by the AGMA for two Metrolink extensions, a
package of cross-city bus improvements and new park-and-ride sites. They have received £195 million of
Central Government funding, with the remaining funding (£244 million) being contributed by local
authorities.
Manchester Metrolink: Future Metrolink
Work is under way on a multi-million pound upgrade and expansion of the Metrolink network. By 2012
four new lines will nearly double the size of the tram network with 20 miles of new track and 27 new
Metrolink stops. The new lines will go to Oldham and Rochdale, Chorlton, Droylsden and MediaCityUK, based in Salford Quays (to which the BBC has relocated parts of its operation from London).
These new lines will help reduce congestion on our roads with five million fewer car journeys every year,
and will increase the number of trips passengers make each day from 55,000 to more than 90,000.
In May 2009, a special fund of £1.5 billion was agreed for 15 transport schemes in Greater Manchester.
These include further Metrolink extensions to Oldham and Rochdale town centres, Didsbury,
Manchester Airport, Ashton-under-Lyne and a second route across Manchester city centre.
A number of other changes are also planned, including a fleet of new trams to reduce crowding, new state-of-the-art ticket machines, and improved passenger facilities and information.
4. Strategic Transport Network: The AGMA have adopted a sub-regional, multi-modal transport approach, which effectively integrates spatial planning into its strategic transport strategy. It uses a corridor approach to land use planning, based on 15 radial corridors, each of which contains a major
public transport link – rail, tram or bus rapid transit – and which collectively develop a series of networks
for the conurbation as a whole. A key element of the strategy is to provide improved and expanded
public transport infrastructure and services in support of regeneration. Corridor partnerships have been
set up in sub-regions and are detailed in the Local Transport Plan 2006/07-2010/11.
5. Accessibility of Key Facilities: Due to the volume of trips generated through employment and visits
to key facilities, such as hospitals, schools and recreation centres, [public transport] accessibility from
all areas within the catchment is critical to sustainable travel. Priorities include improving accessibility
to key facilities and employment from areas of significant deprivation, so as to maximise impact on social
inclusivity, employment levels and productivity, as well as pursuing secondary economic benefits through
developing the conurbation's secondary town centres. For example, additional sections of the
Oldham/Rochdale Metrolink line provide connections to a series of town and district centres, acting as a
catalyst for economic regeneration and housing market renewal.
6. Traffic Demand Management: This has been implemented by the local metropolitan councils through
policies and strategies. It involves reducing travel demand, especially for single-occupancy private vehicles,
and redistributing this demand in space and time through the use of alternatives. Its success has been
weakened by the rejection of the congestion charge, which highlights the problems in developing
sustainable transport systems. Pricing measures such as [lower] public transport ticket prices and [higher] parking
charges in the City Centre help push the demand for car travel downwards.
7. Other Schemes and Future Ideas: Other small-scale schemes and initiatives are also vital to an
integrated sustainable transport network: park and ride schemes, walking bus schemes for school
children (taking cars off the roads at peak times), the development of cycle lanes in town centres and
the development of the Sustrans Cycle Network have all played a role in Manchester. Other
government-backed schemes such as Ride to Work allow people to receive tax breaks on bikes and cycling
equipment. There are even plans to allow the opening up of hard shoulders on motorways to ease peak
flows, as well as dedicating one lane to car-sharers who carry more than one passenger, although there
are practicality issues. Other schemes to ease congestion have been trialled in Bristol, such as turning
off traffic lights in the CBD to improve flow; if successful, this could be rolled out nationally. Car-sharing
websites which operate in the Manchester region (liftshare.com) have also been set up, allowing people
to make arrangements to share lifts with each other.