User's Guide: Cascade Event Model

Associação para o Desenvolvimento da Aerodinâmica Industrial
Rua Pedro Hispano, 12; 3031-289 Coimbra, Portugal
www.adai.pt; 239708580

Title: CRISMA Cascade Event Model (CEM)
Project: CRISMA – Modelling crisis management for improved action and preparedness
http://www.crismaproject.eu
This project has received funding from the European Union's Seventh Framework Programme.
Anna-Mari Heikkilä: crisma.coordinator@vtt.fi / +358207223490
Date: May 2015
CEM: developed by ADAI (www.adai.pt) and AMRA (www.amra.it)
Miguel Almeida: miguelalmeida@adai.pt / +351964437136
INDEX
Cascade Effect Model Code
    General overview of the cascade event map transformation mechanism
    Probability function transformation matrices
    Web processing service
Cascade Effect Model Practical Instructions
    Submission webpage overview
    Triggering event
    Triggered events
    Final event
    Web Processing Service Request
Annex A. Python code
CASCADE EFFECT MODEL CODE
General overview of the cascade event map transformation mechanism
The code produced is a standalone software tool for cascade event processing that
requires the world state information to be provided independently; in this case, that
information is provided by the general CRISMA platform. The architecture of the
software is based on the algorithm presented in Figure 1.
Figure 1: General overview of the CEMT mechanism
The CE Map Transformation mechanism (CEMT, Figure 1) consists of a Web
Processing Service (WPS) and a Web Map Service (WMS) running on a server:
• The WPS gives the server the ability to run processes, which can perform
  operations on the input data.
• The WMS allows the server to store and publish the data that results from the
  operations performed.
The input data comprises shapefiles, event chains and probability function
transformation matrices (PFTMs), which are stored in an external data repository. The
server returns the output data to the CE tool in the form of a rendered image
representing the transformations performed on the input data.
Probability function transformation matrices
Each PFTM contains structured data about the probability of an initiator event
triggering another event in the chain. These transitions are defined as three columns
in a multi-row layout. For example, in the PFTM representing the collapse of
electricity poles following an earthquake, the first column gives the intensity of the
triggering event in the event chain, the second column gives the intensity of the
triggered event (in this specific case a Boolean variable indicating whether the event
occurs), and the third column gives the probability of occurrence of the corresponding
effect on the world. An illustrative example of such a matrix is given below.
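As an illustration only, a PFTM can be stored as a semicolon-separated .csv file named after the event it belongs to; the file name below reuses the PGA_Poles identifier from the request example later in this guide, and the numeric values are hypothetical rather than taken from the pilot data:

0.5;1;0.05
1.0;1;0.20
2.0;1;0.60

Each row is read as a list of floats, as the readFileToList helper in Annex A does; a minimal equivalent sketch:

def read_pftm(path, separator=";"):
    # read a semicolon-separated PFTM file into a nested list of floats
    rows = []
    with open(path, "r") as matrixfile:
        for line in matrixfile:
            rows.append([float(value) for value in line.split(separator)])
    return rows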
Web processing service
In order to perform a transformation on a set of data, the server must receive a WPS
HTTP GET request from the CE Tool, specifying the location and the identifiers of
the data to be used in the transformation process (shapefiles, event chains and
transformation matrices). After the request is received, the WPS process (Figure 2)
is launched and becomes responsible for performing the desired transformations on
the world state information for the event chain being analysed. It retrieves and opens
a shapefile located in the data repository designated in the request. The data contained
in the shapefile is extracted and transformed taking into account the data present in
the transformation matrix corresponding to the current event chain. The data resulting
from the transformation process is inserted into a new shapefile and then rendered to
a specified image format. Finally, the WPS/WMS server returns the resulting image
to the client CE tool for graphical visualization of the results. A condensed sketch of
the per-event transformation step is given after Figure 2.
Figure 2: WPS process in detail
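Annex A contains the complete process; the sketch below only condenses the per-event lookup step under simplifying assumptions (the attribute name "PGA" is the field read in Annex A, the helper names are hypothetical, and error handling is omitted):

from osgeo import ogr

def lookup_probability(matrix, intensity):
    # keep the probability of the last PFTM row whose triggering intensity
    # is below the observed intensity (same rule used in Annex A)
    probability = matrix[0][2]
    for row in matrix:
        if float(row[0]) < float(intensity):
            probability = row[2]
    return probability

def sketch_transform(shape_path, matrix):
    # open the shapefile read-only and attach a probability to every feature
    source = ogr.GetDriverByName("ESRI Shapefile").Open(shape_path, 0)
    results = []
    for feature in source.GetLayer():
        geometry = feature.GetGeometryRef()
        intensity = feature.GetField("PGA")
        results.append((geometry.GetX(), geometry.GetY(),
                        lookup_probability(matrix, intensity)))
    return results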
CASCADE EFFECT MODEL PRACTICAL INSTRUCTIONS
Submission webpage overview
The CRISMA Cascade Effect Model simulator can be accessed through a webpage
deployed on the machine running the service. Its layout is shown below:
Figure 3: File submission interface
Upon accessing the webpage, three components from the event chain can be
submitted for processing:
• Triggering event.
• Triggered event.
• Fuel map.
Triggering event
The triggering event is defined by a shapefile, which consists of several files as
shown in Figure 4.
Figure 4: Shapefile components
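Because a shapefile is a set of files rather than a single file (Figure 4), all of its components must be available before submission. The snippet below is only a hypothetical pre-upload check; the helper name is illustrative, the base name reuses the Shakemap_MainEvent identifier and the data location from the request example, and the optional .prj projection file, when present, should be uploaded as well:

import os

REQUIRED_EXTENSIONS = (".shp", ".shx", ".dbf")

def missing_components(base_path):
    # return the mandatory shapefile components that are not on disk
    return [ext for ext in REQUIRED_EXTENSIONS
            if not os.path.exists(base_path + ext)]

print(missing_components("/usr/lib/cgi-bin/map/Shakemap_MainEvent"))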
The shapefiles define characteristics and properties that can be geographically
represented. In the case of the triggering event in Figure 5, the intensity plot of an
earthquake is represented in relation to the epicentre.
Figure 5: Triggering event visualization
Triggered events
The next event in the event chain is also defined by a shapefile, which represents the
points (Figure 6) that the triggering event will influence. The remaining events can be
represented by transition matrices, because the points over which the effects will be
applied are already defined. In Annex A, each of these points is matched to the nearest
sample of the triggering event; a minimal sketch of that matching is given after Figure 6.
Figure 6: Triggered event visualization
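A minimal sketch of that matching, using the same cKDTree nearest-neighbour query as the fuel-map step in Annex A (the coordinates below are hypothetical):

import numpy
from scipy.spatial import cKDTree

# hypothetical lon/lat of triggering-event intensity samples and of one pole
samples = numpy.array([[8.10, 40.20], [8.15, 40.22]])
poles = numpy.array([[8.11, 40.21]])

# index of the nearest intensity sample for each pole
dists, indexes = cKDTree(samples).query(poles, k=1)
print(indexes[0])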
Final event
The map that contains the final effect, in this case the fuel map, is also defined by
several files, which represent the various materials classified according to different
combustion probabilities. In Annex A this map is supplied as a GeoTIFF and converted
to an ASCII XYZ grid before being cross-referenced; a sketch of that conversion follows.
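A sketch of that conversion, assuming the fuel map is the CRISMA_PilotD_FuelMap GeoTIFF stored in the data location used in the request example:

from osgeo import gdal

# copy the fuel-map GeoTIFF to the ASCII "XYZ" format, one "x y value" line
# per cell, as the FuelMap branch in Annex A does before cross-referencing
src = gdal.Open("/usr/lib/cgi-bin/map/CRISMA_PilotD_FuelMap.tif")
dst = gdal.GetDriverByName("XYZ").CreateCopy("/usr/lib/cgi-bin/map/TifToXYZ.txt", src, 0)
dst = None  # flush and close the output
src = None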
Web Processing Service Request
After the upload of the three files is completed, a processing request is sent to the
Web Processing Service (WPS):
http://SERVER_IP/cgi-bin/pywps.cgi?service=wps&version=1.0.0&request=execute&identifier=ProcessMap&datainputs=%5Bevents=Shakemap_MainEvent,PGA_Poles,CRISMA_PilotD_FuelMap;datalocation=/usr/lib/cgi-bin/map/%5D
This request is composed of:
• The service address: http://SERVER_IP/cgi-bin/pywps.cgi
• The service type: service=wps
• The service version: version=1.0.0
• The WPS action: request=execute
• The process identification: identifier=ProcessMap
• The user input to the process (in this case the three or more events to analyse):
  datainputs=%5Bevents=Shakemap_MainEvent,PGA_Poles,CRISMA_PilotD_FuelMap
• The location of the files over which the process will perform the processing:
  datalocation=/usr/lib/cgi-bin/map/
A minimal client-side example of issuing this request is shown after this list.
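The sketch below issues the same request from Python; it assumes the third-party requests package is available and keeps SERVER_IP as a placeholder. (%5B and %5D in the URL are simply the URL-encoded "[" and "]" that delimit the datainputs value.)

import requests

url = ("http://SERVER_IP/cgi-bin/pywps.cgi"
       "?service=wps&version=1.0.0&request=execute&identifier=ProcessMap"
       "&datainputs=%5Bevents=Shakemap_MainEvent,PGA_Poles,CRISMA_PilotD_FuelMap;"
       "datalocation=/usr/lib/cgi-bin/map/%5D")
response = requests.get(url)
print(response.text)  # WPS ExecuteResponse XML returned by the server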
The WPS service itself consists of an installed service that supports several processes
defined in Python. The processes available for execution on the server can be listed
with the request:
http://SERVER_IP/cgi-bin/pywps.cgi?service=wps&version=1.0.0&request=getcapabilities
This request returns XML information (Figure 7) that can be visualized in the
browser window or used for further processing on the client side.
Figure 7: XML information returned by the server
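As an example of such client-side processing, the sketch below lists the identifiers of the published processes; it assumes the requests package, a placeholder SERVER_IP, and the standard OGC WPS 1.0.0 / OWS 1.1 XML namespaces:

import requests
import xml.etree.ElementTree as ET

url = ("http://SERVER_IP/cgi-bin/pywps.cgi"
       "?service=wps&version=1.0.0&request=getcapabilities")
root = ET.fromstring(requests.get(url).content)
namespaces = {"wps": "http://www.opengis.net/wps/1.0.0",
              "ows": "http://www.opengis.net/ows/1.1"}
# print every process identifier offered by the server, e.g. ProcessMap
for process in root.findall(".//wps:ProcessOfferings/wps:Process", namespaces):
    print(process.find("ows:Identifier", namespaces).text)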
After the processing request has been executed, the server returns a message
confirming that the data processing finished successfully, from which information
about the data transformation process can be retrieved.
ANNEX A. PYTHON CODE
from pywps.Process import WPSProcess
import os
import logging
# import pickle
import numpy
import math
from scipy.spatial import cKDTree
from osgeo import gdal
from osgeo import ogr
from osgeo import osr

cascadeEvent = []
probabilityCoords = []


def readFileToList(infile, separator):
    nestedlist = []
    for line in infile:
        listline = map(float, line.split(separator))
        nestedlist.append(listline)
    return nestedlist


def transformData(dataLocation, event, etype, countEvents):
    global cascadeEvent
    global probabilityCoords
    logging.debug("countEvents: %d" % (countEvents))
    if etype == "FuelMap":
        # Open Fuel Map and translate to ascii xyz
        # Optimize and check if there is already a fuel map with this name
        # or if the fuel map changed
        fuelMaplocation = dataLocation + event + ".tif"
        fuelMapSource = gdal.Open(fuelMaplocation)
        logging.debug("Opened Fuel Map")
        format = "XYZ"
        driver = gdal.GetDriverByName(format)
        fuelMapDestination = driver.CreateCopy(dataLocation + "TifToXYZ.txt",
                                               fuelMapSource, 0)
        fuelMapDestination = None
        fuelMapSource = None
        # Cross-Reference Fuel Map
        # Open parsed fuel file
        parsedFuelMapLocation = dataLocation + "TifToXYZ.txt"
        fuelFile = open(parsedFuelMapLocation, 'r')
        # Open Fuel Probability matrix
        fuelProblocation = dataLocation + event
        fuelProbfile = open(fuelProblocation + ".csv", 'r')
        fuelMatrix = readFileToList(fuelProbfile, ";")
        # Go through event probability and find the nearest point to
        # cross-reference in the fuel map
        for Item in cascadeEvent:
            Item.append("10000")
            Item.append("")
        logging.debug("Items Appended")
        fuelMapData = []
        fuelMapCoords = []
        for fuelauxline in fuelFile:
            x, y, z = map(float, fuelauxline.split(" "))
            fuelMapDataLine = []
            fuelMapCoordsLine = []
            fuelMapDataLine.append(z)
            fuelMapData.append(fuelMapDataLine)
            fuelMapCoordsLine.append(x)
            fuelMapCoordsLine.append(y)
            fuelMapCoords.append(fuelMapCoordsLine)
        cascadeCoords = []
        for Item in cascadeEvent:
            cascadeCoordsLine = []
            cascadeCoordsLine.append(Item[0])
            cascadeCoordsLine.append(Item[1])
            cascadeCoords.append(cascadeCoordsLine)
        cascadeData = numpy.array(cascadeEvent)
        cascadeMap = numpy.array(cascadeCoords)
        dataMap = numpy.array(fuelMapData)
        coordsMap = numpy.array(fuelMapCoords)
        k_neighbours = 1
        tree = cKDTree(coordsMap)
        dists, indexes = tree.query(cascadeCoords, k=k_neighbours)
        logging.debug("FuelMap processed")
        # Create a shapefile
        driver = ogr.GetDriverByName("ESRI Shapefile")
        shapePath = dataLocation + "Probabilities.shp"
        if os.path.exists(shapePath):
            os.remove(shapePath)
        data_source = driver.CreateDataSource(shapePath)
        # create spatial reference
        srs = osr.SpatialReference()
        srs.ImportFromEPSG(4326)
        layer = data_source.CreateLayer("Probabilities", srs, ogr.wkbPoint)
        layer.CreateField(ogr.FieldDefn("Longitude", ogr.OFTReal))
        layer.CreateField(ogr.FieldDefn("Latitude", ogr.OFTReal))
        field_prob = ogr.FieldDefn("Prob", ogr.OFTString)
        field_prob.SetWidth(24)
        layer.CreateField(field_prob)
        for probabilityLine in probabilityCoords:
            feature = ogr.Feature(layer.GetLayerDefn())
            feature.SetField("Longitude", probabilityLine[0])
            feature.SetField("Latitude", probabilityLine[1])
            feature.SetField("Prob", probabilityLine[2])
            wkt = "POINT(%f %f)" % (float(probabilityLine[0]),
                                    float(probabilityLine[1]))
            point = ogr.CreateGeometryFromWkt(wkt)
            feature.SetGeometry(point)
            layer.CreateFeature(feature)
            feature.Destroy()
        data_source.Destroy()
    # Cascade events other than the first one, which requires shapefile handling
    elif etype == "Event" and countEvents >= 2:
        # Process Probability Function
        matrixlocation = dataLocation + event
        matrixfile = open(matrixlocation + ".csv", 'r')
        matrix = readFileToList(matrixfile, ";")
        for Item in cascadeEvent:
            intensity = Item[2]
            probability = matrix[0][2]
            for probFunction in matrix:
                if float(probFunction[0]) < float(intensity):
                    probability = probFunction[2]
            # Create probabilityList
            probabilityLine = []
            probabilityLine.append(Item[0])
            probabilityLine.append(Item[1])
            probabilityLine.append(probability)
            probabilityCoords.append(probabilityLine)
        cascadeEvent = probabilityCoords
    else:
        ShapeFile = dataLocation + event + ".shp"
        driver = ogr.GetDriverByName('ESRI Shapefile')
        dataSource = driver.Open(ShapeFile, 0)
        # 0 means read-only. 1 means writeable.
        # Check to see if shapefile is found.
        if dataSource is None:
            logging.debug("Could not open %s" % (ShapeFile))
        else:
            logging.debug("Opened %s" % (ShapeFile))
            layer = dataSource.GetLayer()
            # Process Shakemap
            if etype == "Triggering":
                # Map projection conversion for adequate resulting map
                srsepsg = layer.GetSpatialRef()
                srsepsg.AutoIdentifyEPSG()
                os.system("ogr2ogr -s_srs EPSG:" +
                          srsepsg.GetAttrValue("AUTHORITY", 1) +
                          " -t_srs EPSG:4326 " +
                          dataLocation + event + "1.shp " +
                          dataLocation + event + ".shp")
                inputEPSG = 32633
                outputESG = 4326
                # Create structure to store data in incorrect lat and long
                point = ogr.Geometry(ogr.wkbPoint)
                # Add coordinates to structure
                logging.debug("Before coordinate system conversion")
                for feature in layer:
                    point.AddPoint(feature.GetField("x"),
                                   feature.GetField("y"))
                    inSpatialRef = osr.SpatialReference()
                    inSpatialRef.ImportFromEPSG(inputEPSG)
                    outSpatialRef = osr.SpatialReference()
                    outSpatialRef.ImportFromEPSG(outputESG)
                    coordTransform = osr.CoordinateTransformation(inSpatialRef,
                                                                  outSpatialRef)
                    # transform point
                    point.Transform(coordTransform)
                    # print point in new coords
                    # Putting Coordinates and PGA on a list
                    Item = []
                    Item.append(point.GetX())
                    Item.append(point.GetY())
                    Item.append(feature.GetField("PGA") * 10)
                    cascadeEvent.append(Item)
            # Process Poles
            if etype == "Event":
                # Process Probability Function
                matrixlocation = dataLocation + event
                matrixfile = open(matrixlocation + ".csv", 'r')
                matrix = readFileToList(matrixfile, ";")
                # verify each pole
                for feature in layer:
                    compCoord = []
                    prevdist = 100000
                    lati = feature.GetField("Lat1")
                    longi = feature.GetField("Long")
                    for Item in cascadeEvent:
                        # calculate distance to pole
                        dist = math.hypot(Item[0] - longi, Item[1] - lati)
                        # store sample with shortest distance to pole
                        if dist < prevdist:
                            compCoord = Item
                            prevdist = dist
                    intensity = compCoord[2]
                    probability = matrix[0][2]
                    for probFunction in matrix:
                        if float(probFunction[0]) < float(intensity):
                            probability = probFunction[2]
                    # Create probabilityList
                    probabilityLine = []
                    probabilityLine.append(longi)
                    probabilityLine.append(lati)
                    probabilityLine.append(probability)
                    probabilityCoords.append(probabilityLine)
                # The items under evaluation are the new dataset
                cascadeEvent = probabilityCoords
    return


class Process(WPSProcess):
    def __init__(self):
        ##
        # Process initialization
        WPSProcess.__init__(self,
                            identifier="ProcessMap",
                            title="Process Map",
                            abstract="""Processes an event chain""",
                            version="1.0",
                            storeSupported=True,
                            statusSupported=True)
        ##
        # Adding process inputs
        self.dataLocation = self.addLiteralInput(identifier="datalocation",
                                                 title="data location",
                                                 allowedValues='*',
                                                 type=str)
        self.events = self.addLiteralInput(identifier="events",
                                           title="events",
                                           allowedValues='*',
                                           type=str)
        ##
        # Adding process outputs
        self.dataOut = self.addLiteralOutput(identifier="output",
                                             title="Output Vector Data")
        self.textOut = self.addLiteralOutput(identifier="text",
                                             title="Output")

    ##
    # Execution part of the process
    def execute(self):
        # retrieve events from input
        events = self.events.getValue()
        x = ""
        eventsList = []
        i = 0
        # while there are events to retrieve
        while i < len(events):
            # build str with event
            if events[i] == ",":
                # add event to list if there is a ,
                eventsList.append(x)
                x = ""
            else:
                x += events[i]
            i += 1
        eventsList.append(x)
        logging.debug("Events: %s" % (eventsList))
        # Enable GDAL/OGR exceptions
        gdal.UseExceptions()
        dataLocation = self.dataLocation.getValue()
        # Go through each event and calculate probabilities
        countEvents = 0
        while countEvents < len(eventsList):
            # Transform
            if countEvents == 0:
                transformData(dataLocation, eventsList[countEvents],
                              "Triggering", countEvents)
            elif countEvents == len(eventsList) - 1:
                transformData(dataLocation, eventsList[countEvents],
                              "FuelMap", countEvents)
            else:
                transformData(dataLocation, eventsList[countEvents],
                              "Event", countEvents)
            countEvents += 1
        # just copy the input values to output values
        self.dataOut.setValue(self.dataLocation.getValue())
        self.textOut.setValue("Done")