Final Paper

Ray Kameda
UEP 232
May 5, 2011
Tsunami Vulnerability Assessment and Crisis Mapping of Actual
Tsunami Damage in Miyagi, Japan
Introduction
My project on the 2011 Tohoku earthquake and tsunami has personal significance for me
because my close family lives in the afflicted Miyagi Prefecture. The damage from the tsunami was
catastrophic and far more significant than the damage from the earthquake itself. Seeing the enormous
loss of life, it seemed that more people could have been saved if adequate tsunami vulnerability
assessments had been used to educate people living or working in coastal zones of high tsunami
vulnerability. Many tsunami evacuation shelters were engulfed by the tsunami because their location and
construction were based on tsunami extent estimates that proved far too conservative; future disaster
prevention policy should instead be based on worst-case risk assessments. I was also briefly involved with a
crisis map team started by students at the Fletcher School of Law and Diplomacy (http://www.sinsai.info/ushahidi/), so for
this project I wanted to analyze how effective the crisis map was at helping those in most need.
Because I was interested in tsunami risk and the crisis map I helped construct, I conducted a
tsunami vulnerability assessment of Miyagi Prefecture and investigated how the Japan Crisis Map
Twitter reports were spatially distributed. The tsunami vulnerability assessment I conducted was largely
based on the assessment by Sinaga et al. (2010), using the topographical variables of elevation,
slope, direction in relation to the tsunami, and proximity to the coast. These rasters, with
90 m x 90 m cells, were overlaid to produce the vulnerability map. I was also able to obtain a shapefile of the crisis map based on Twitter
reports from March 11 to 22. I investigated the effectiveness of the vulnerability assessment and the
crisis map by comparing them to actual tsunami damage: I compared my tsunami risk analysis with the
tsunami inundation zone detected from satellite imagery, and I estimated how many people
might have been at risk using population density data gathered by the Japanese census. I compared the crisis
map distribution with officially reported casualties by district.
Literature Review
Sinaga TPT, Nugroho A, Lee YW, and Suh Y (2010) GIS Mapping of Tsunami Vulnerability: Case
Study of the Jembrana Regency in Bali, Indonesia, Journal of Civil Engineering. 15(3):537-543.
This is the primary article on which I based my tsunami risk assessment design. It
investigated the tsunami vulnerability of the Jembrana Regency in Bali, Indonesia based on the region's
topographical features: elevation, slope, topographic relation to tsunami direction, coastal
proximity, and coastal shape. The authors also overlaid the vulnerability
map onto a land use map to determine which types of land are at the highest tsunami risk, and found
that residential and agricultural areas were especially at risk. The paper concludes that GIS-based
analysis is useful for a wide variety of disaster assessments that aid regional planning for disaster mitigation
and prevention, but notes the limitation that only five geospatial variables
were used, when in reality many more variables determine the high-risk areas.
Their relatively simple method was well suited to my tsunami vulnerability analysis, because I had
access to fairly high-resolution elevation data from the Shuttle Radar Topography Mission while other
freely available GIS data on the region were generally lacking.
Taubenbock H, Goseburg N, Setiadi N, Lammel G, et al., 2009. Last mile preparation for a potential
disaster- interdisciplinary approach towards tsunami early warning and an evacuation information
system for a coastal city of Padang, Indonesia. Nat. Hazards Earth Syst Sci. 9:1509-1528.
This paper presents a more in-depth tsunami vulnerability analysis that takes an interdisciplinary
approach to building a tsunami early warning and evacuation information system. Because
recent natural disasters have caused such widespread destruction, it addresses the need for
effective management involving multiple stakeholders and decision-makers in
systematic risk identification and evaluation. The study combines detailed inundation
modeling, urban morphology analysis, population assessment, socio-economic analysis of the
population at risk, and evacuation modeling. Because high-resolution spatial data for the
region in Indonesia were lacking, remotely sensed datasets were used extensively, including the SRTM elevation
dataset that I used for my project. This paper showed how complex a tsunami vulnerability
assessment can become, and the importance of using not only geospatial attributes in tsunami risk analysis
but social variables as well. For my project I was only able to include population density.
Gao H, Wang X, Barbier G, and Liu H (2011) Promoting coordination for disaster relief- from crowd
sourcing to coordination. Lecture Notes in Computer Science. 6589: 197-204.
This article is essentially an introduction to how crowdsourced information can help
governmental and nongovernmental organizations respond to a crisis more effectively and
provide relief more efficiently. Among the crowdsourcing applications it mentions are Twitter, Facebook,
Ushahidi, and Sahana. It also describes the shortfalls, especially the fact that there is no real central
control, making it difficult to avoid conflicts between different organizations' interests. It addresses how
crowdsourcing falls short for disaster relief for several reasons: there is no common mechanism
for coordination between relief organizations, there are no adequate security features,
and it does not always provide all the information relief efforts need. It then
explains how a better crowdsourcing application could address these issues. I sought to
investigate the shortfalls of the crowdsourced Ushahidi data to see whether they properly represented the people
in most need.
Kerle N. 2011. Remote sensing based post-disaster damage mapping- ready for a collaborative
approach? Earthzine. <http://commons.un-spider.org/pdf/20110323_earthzine_article.pdf>
This article discusses how post-disaster damage mapping has evolved from a purely remote sensing
approach to a more collaborative, crowdsourced approach. It covers the limitations of
remote sensing of disaster areas, newly emerging mapping methods, and a rapidly expanding field of
stakeholders, both mappers and users. It first addresses the problems of
traditional remote-sensing, image-based damage mapping, including the lack of a universal
nomenclature and common scales, competing organizational interests, non-expert mappers, static maps, and the lack of
damage validation. It also describes new trends in disaster mapping, especially crisis mapping, Google
Maps, and OpenStreetMap. The article concludes that even though the resolution of remote sensing
imagery has increased, identifying lower damage scales is still very difficult. The
crisis mapping methods developed during the Haiti earthquake showed that there is tremendous
potential for post-disaster information gathering by non-professionals and great willingness among volunteers.
Despite this, the author did not recommend that volunteers map damage themselves, arguing that
expert-based collaboration would be more useful.
In this project I investigated the damage from the tsunami and the effectiveness of a crisis map. I examined the
distribution of the spatially referenced Twitter reports mapped on the Japan Crisis Map to see whether it
represented the damaged areas in most need, and I created a tsunami vulnerability analysis based on
topographical features to see whether it properly reflected the actual damage and casualties caused by the
tsunami.
Data Sources
 Japan National and Regional Planning Bureau
(http://nlftp.mlit.go.jp/ksj/jpgis/jpgis_datalist.html)
Coastline shapefile
Year updated:2006
Metadata Link: http://nlftp.mlit.go.jp/ksj/jpgis/product_spec/KS-PS-C23-v2_0.pdf
Water Bodies shapefile
Year updated:2007
Metadata link: http://nlftp.mlit.go.jp/ksj/jpgis/product_spec/KS-PS-W09-v1_1.pdf
Land use raster (downloaded and processed with shape-to-raster, but not used): XML file
Metadata: http://nlftp.mlit.go.jp/ksj/jpgis/product_spec/KS-PS-L03b-v1_1.pdf
Elevation (downloaded and processed, but not used): XML file
Japan Crisis Map http://www.sinsai.info/ushahidi/
Shapefile point data of Twitter feed from:
http://geocommons.com/overlays/100145
SRTM Elevation Raster
From H:drive
Harvard Japan Sendai Earthquake Data Portal.
Shapefile Japan Administrative boundaries
Link: http://cegrp.cga.harvard.edu/japan/?q=resources_data
E-STAT.gov.jp Census Population
500 m mesh grid: shapefile (later converted to raster)
Table with population data: .txt
Year: 2005
University of Colorado Flood Observatory,
Link: http://floodobservatory.colorado.edu/Sendai.html
Date: March 13, 2011
Earthquake Casualty Data Text file
From: Matthew Bloch from New York Times
Date: official casualties up to March 31, 2011
ESRI World Topographic base layer
Link: http://www.arcgis.com/home/item.html?id=f2498e3d0ff642bfb4b155828351ef0e
Data Preparation
Preprocessing
1. Project shapefile layers to WGS 1984 UTM 54N
Some layers were in JGD2000 and had to be converted to WGS 1984. Others were not
projected at all and were in a geographic coordinate system, so I had to project them to UTM 54N.
2. Preprocess administrative boundaries
I used Select By Attributes to select only the Miyagi Prefecture
municipalities and exported the selection.
3. Preprocessing of SRTM elevation raster data
I clipped the raster to my study area around Miyagi Prefecture and projected it using the data export function.
4. Preprocessing of population density raster data from E-STAT.gov.jp
The rasters available on the website are organized in a numbered grid format, and Miyagi
Prefecture covers grid areas 5740, 5741, and 5640 (the numbered grid is documented at
http://www.estat.go.jp/SG2/toukeichiri).
The data available from e-Stat consist of a population table in text file format with a
corresponding gridded shapefile, so I had to download both and join the table to the gridded
shapefile on their key_code field. Unfortunately the key was a string in one dataset and an
integer in the other, so I used Microsoft Access to change the key_code in the population
table to a string.
I then converted the joined grids into rasters using the Polygon to Raster tool and combined
them using the Mosaic to New Raster tool.
5. Preprocessing the crisis mapping data
Project to UTM 54N
6. Preprocessing the casualties data
For the casualties text file I received from Matthew Bloch, I used Microsoft Access to
make the dataset usable in ArcGIS and to define the variable types.
The data included longitude and latitude fields, so I plotted the table using Display XY Data
and then projected the casualty points to UTM 54N. A scripted sketch of these preprocessing
steps is shown below.
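
The preprocessing above was done interactively in ArcMap, but the same steps can be scripted. Below is a minimal arcpy sketch of steps 1, 2, 4 and 6, written with hypothetical file, layer and field names (coastline.shp, PREF_NAME, key_code, POPULATION, LON/LAT and so on); it illustrates the workflow rather than the exact commands I ran.

import arcpy

arcpy.env.workspace = r"C:\tsunami_project"          # assumed project folder
arcpy.env.overwriteOutput = True
utm54n = arcpy.SpatialReference("WGS 1984 UTM Zone 54N")
wgs84 = arcpy.SpatialReference(4326)

# Step 1: project shapefiles (JGD2000 or unprojected geographic) to UTM 54N.
arcpy.Project_management("coastline.shp", "coastline_utm54.shp", utm54n)

# Step 2: keep only the Miyagi Prefecture municipalities.
arcpy.MakeFeatureLayer_management("admin_boundaries.shp", "admin_lyr")
arcpy.SelectLayerByAttribute_management("admin_lyr", "NEW_SELECTION",
                                        "\"PREF_NAME\" = 'Miyagi'")   # assumed field name
arcpy.CopyFeatures_management("admin_lyr", "miyagi_municipalities.shp")

# Step 4: join the e-Stat population table (key_code already converted to a
# string in Access) to each 500 m grid shapefile, rasterize, then mosaic.
pop_rasters = []
for grid in ["mesh5740.shp", "mesh5741.shp", "mesh5640.shp"]:
    arcpy.JoinField_management(grid, "key_code", "population.dbf", "key_code")
    out_ras = grid.replace(".shp", "_pop.tif")
    arcpy.PolygonToRaster_conversion(grid, "POPULATION", out_ras,
                                     "CELL_CENTER", "", 500)
    pop_rasters.append(out_ras)
arcpy.MosaicToNewRaster_management(pop_rasters, arcpy.env.workspace,
                                   "miyagi_pop.tif", utm54n,
                                   "32_BIT_FLOAT", 500, 1)

# Step 6: plot the casualty table from its longitude/latitude fields and
# project the resulting points to UTM 54N.
arcpy.MakeXYEventLayer_management("casualties.txt", "LON", "LAT",
                                  "casualty_events", wgs84)
arcpy.CopyFeatures_management("casualty_events", "casualties_wgs84.shp")
arcpy.Project_management("casualties_wgs84.shp", "casualties_utm54.shp", utm54n)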
Tsunami Vulnerability Assessment
1. With the slope and elevation rasters already prepared, I made the raster of direction relative to
the tsunami by running the Euclidean Direction tool on the coastline vector, limited to 10 km
(the maximum distance the tsunami travelled inland).
I made sure that the raster was snapped to the elevation raster with its 90 m x 90 m cell size.
2. I clipped this raster to Miyagi Prefecture using the Extract by Mask tool so that only the
inland distance zone was left.
3. I made the coastal proximity raster with the Euclidean Distance tool in the same way and also clipped it to Miyagi.
4. Reclassifying
I reclassified each raster from 1 to 5, with 5 being "very high risk," using the
Reclassify tool and following the class breaks in Sinaga et al. (2010).
5. I used the Weighted Overlay tool to overlay all my raster sets and create the vulnerability
assessment. I loosely based the weights on Sinaga's final weighting, but I was missing the coastal
shape variable because I could not figure out how to derive it.
6. I then overlaid the inundation zone shapefile on the vulnerability raster to see how they
coincided, and ran another Extract by Mask of the vulnerability raster using the inundation zone.
To calculate the area of the overlap, I took the cell count of class 5 ("very high risk")
from the raster attribute table and multiplied it by 90 x 90 m (8,100 m2 per cell) to get the total
area. I used the count data to calculate the percentage overlap as well.
7. With the compiled population dataset, I classified the values somewhat arbitrarily, making sure
that lower concentrations were still represented in the population density map, and I used
Extract by Mask to clip the population raster to the inundation zone. I also extracted
only the "very high risk" part of the vulnerability raster and used it to extract the population
raster as well.
To estimate the population at risk I simply ran the statistics and recorded the
sums. This may not be very accurate because the population raster was not resampled to
the 90 m cell size; when I tried to snap it earlier, it produced an inaccurate
population density map. A scripted sketch of these overlay and extraction steps follows this list.
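
The overlay itself was done with the Spatial Analyst tools in ArcMap, but the workflow can be sketched in arcpy as below. The file names, reclassification breaks and weights are placeholders (the real breaks and weights followed Sinaga et al. (2010) as closely as possible), and the Weighted Overlay step is written as an equivalent weighted sum in map algebra.

import arcpy
from arcpy.sa import (EucDistance, EucDirection, Reclassify, RemapRange,
                      ExtractByMask, Int)

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\tsunami_project"
arcpy.env.snapRaster = "srtm_miyagi.tif"      # snap everything to the 90 m SRTM grid
arcpy.env.cellSize = "srtm_miyagi.tif"

# Steps 1-3: distance and direction rasters out to 10 km from the coastline,
# clipped to Miyagi Prefecture.
dist = ExtractByMask(EucDistance("coastline_utm54.shp", 10000),
                     "miyagi_municipalities.shp")
direction = ExtractByMask(EucDirection("coastline_utm54.shp", 10000),
                          "miyagi_municipalities.shp")

# Step 4: reclassify each variable to 1-5 (5 = very high risk). The breaks
# shown here for coastal proximity are placeholders.
prox_class = Reclassify(dist, "Value",
                        RemapRange([[0, 500, 5], [500, 1000, 4], [1000, 2000, 3],
                                    [2000, 5000, 2], [5000, 10000, 1]]))

# Step 5: weighted overlay, written as a weighted sum in map algebra.
# elev_class, slope_class and dir_class are assumed to be reclassified already;
# the weights are placeholders that sum to 1.
elev_class = arcpy.Raster("elev_class.tif")
slope_class = arcpy.Raster("slope_class.tif")
dir_class = arcpy.Raster("dir_class.tif")
vulnerability = Int(0.35 * elev_class + 0.25 * slope_class +
                    0.25 * prox_class + 0.15 * dir_class + 0.5)
vulnerability.save("vulnerability.tif")

# Step 6: overlap with the satellite-detected inundation zone. Each cell is
# 90 m x 90 m = 8,100 m2, so area = cell count x 8,100.
vuln_in_zone = ExtractByMask(vulnerability, "inundation_zone.shp")
zone_values = arcpy.RasterToNumPyArray(vuln_in_zone, nodata_to_value=0)
area_very_high_m2 = int((zone_values == 5).sum()) * 90 * 90
pct_very_high = (zone_values == 5).sum() / float((zone_values > 0).sum())

# Step 7: population potentially at risk, summed inside the inundation zone.
pop_in_zone = ExtractByMask("miyagi_pop.tif", "inundation_zone.shp")
pop_at_risk = arcpy.RasterToNumPyArray(pop_in_zone, nodata_to_value=0).sum()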
Crisis Map
1. Using the crisis map point data depicting the Twitter reports, I spatially joined the points to the
municipality map to get the count of reports for each municipality, so that I could compare the
distribution to the casualty map, which was reported by district.
To do this, I had to use Microsoft Excel to make sure the names of the
municipalities coincided (for some reason Access could not open the .dbf files).
2. For the analysis I opened the attribute tables of both datasets and calculated each
municipality's count (of casualties or of reports) relative to the total count for all of
Miyagi, storing the result in a new field. This made it possible to compare two datasets
that are on completely different scales.
I then added another field holding the difference between the relative Twitter reports and
the relative deaths, so negative numbers mean the relative Twitter reports were lower
than the relative casualties. A small sketch of this calculation follows this list.
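
The relative-share fields were calculated with the Field Calculator in the ArcGIS attribute tables, but the same arithmetic is easy to show in a small pandas sketch, assuming hypothetical tables of report and casualty counts per municipality:

import pandas as pd

reports = pd.read_csv("crisis_map_counts.csv")     # Twitter report count per municipality
casualties = pd.read_csv("casualty_counts.csv")    # official deaths per municipality

merged = reports.merge(casualties, on="municipality")

# Express each municipality's counts as a share of the Miyagi-wide total so the
# two datasets, which are on completely different scales, can be compared.
merged["report_share"] = merged["reports"] / merged["reports"].sum()
merged["death_share"] = merged["deaths"] / merged["deaths"].sum()

# Negative values mean the municipality produced relatively fewer Twitter
# reports than its share of the casualties (i.e., it was underrepresented).
merged["difference"] = merged["report_share"] - merged["death_share"]

print(merged.sort_values("difference").head())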
Difficulties
Data collection and preparation was probably the most time-consuming part of my project
because of the general lack of data, given that the earthquake had only recently occurred, and
because of the difficulty of importing data from Japanese sources into ArcGIS. Several sites, like the
Harvard Japan Sendai Earthquake Data Portal, posted the limited amount of GIS
information available about the earthquake, but many of the files were either KML files for Google Earth or
already-analyzed images or PDF files. Initially, I converted KML files from Google Earth into
shapefiles, but that did not work well because the Arc2Earth program had problems importing
datasets containing Japanese text. Shapefile datasets in Japanese also did not work because the
encoding could not be read, so I had to install the Japanese Language Pack on my own
personal computer and do the entire project on that machine. Figuring out how to make
the Japanese datasets work took a lot of time, especially for the elevation and land use raster data
from the Japan National and Regional Planning Bureau
(http://nlftp.mlit.go.jp/ksj/jpgis/jpgis_datalist.html), which were only available as XML files and
required an XML-to-shapefile conversion tool called the KSJ tool. In the end I did not use the
elevation or land use datasets in my project, because the SRTM raster dataset had a higher
resolution than the 500 m dataset available there.
I also had trouble manipulating datasets because I had to learn some basic
Microsoft Access skills to make sure the attributes matched when I joined tables.
Conclusion:
My tsunami vulnerability analysis found a fairly good correspondence between the
inundation zone and my high risk zones. It was also interesting to find, from my estimate of
how many people might have been in the zone, that roughly half were able to escape. That still means a
large proportion of people in the heavily afflicted zone were most likely killed, and the proportion may
grow as more deaths are confirmed every day. It was also interesting to see that my
prediction was right that the crisis map underrepresented the most heavily afflicted zones and
overrepresented the Sendai city area. All in all, I think that, just as Kerle (2011) argued, the rapidly
expanding practice of collaborative and crowdsourced post-disaster mapping has great potential
to improve the effectiveness of relief efforts in a crisis.
Japan has technologically advanced earthquake-resistant architecture, and earthquake
evacuation drills are a regular part of life for Japanese citizens, but I think there is a disconnect
between disaster prevention policy and proper risk assessment. The stoic mentality and the
ability of the people to endure through thick and thin have allowed poor policy based on outdated
risk assumptions to continue. As was seen with the unprecedented 2010 Sumatra earthquake,
natural catastrophes are extremely stochastic, and disaster prevention policies are based on far too
conservative perceptions of risk. The problem is that earthquakes occur on a vast geologic time
scale, so the modern record of past earthquakes is an extremely small sample of the earth's
earthquake history. I think the policy makers and bureaucrats are largely ineffective because
people do not demand enough of them or participate enough in politics. Watching the footage of angry
evacuees yelling at the TEPCO officials visiting the shelters, I hope the Japanese people will get
angry and demand adequate policy based on proper risk assessments.
Future Research
Unfortunately I did not end up using the land use data for my final project, but it would have
been another interesting addition to the analysis, either to classify the tsunami vulnerability of
different land use types or to determine which types of land use were most affected.
I had also edited the date attribute of the crisis map posts, so it would have been
interesting to investigate the temporal aspect of how the crisis map was populated and to see whether the
most heavily afflicted districts were reporting relatively later. What I really wanted to do was
compare the crisis map to an interesting dataset I found from NASA of nighttime lights, in which red
areas indicate blackouts. I wanted to create polygons of these blackout areas and overlay them with the crisis map
data to look for an inverse correlation in distribution.
It would also have been better to use a population dataset that showed the distribution of the
population during the day; the dataset I used was based on where people live, so it reflects more
how the population is distributed at night.
The dataset I used for the crisis map points was also not categorized the way the original crisis
map reports are (people searches, free lodging offers, calls for help, etc.), as the KML file of
reports was; unfortunately the Arc2Earth update that I was told would fix the
problem of converting it into a shapefile was not released early enough. It would be interesting to
investigate the distribution of the different types of Twitter reports.