 CAUGHT (ZG) AND LPAZ (GT) EXAMPLE
0.0 DATA
0.1 – INTRODUCTION TO DATA:
For this example, we have downloaded and formatted the data in the correct form to plug into
the ANT workflow. We are using data from the CAUGHT (ZG) network and the GTSN (GT).
There are about 50 broadband stations from CAUGHT, for which we downloaded the LHZ
channel (1 sps vertical component), and one broadband station from the GTSN, for which we
downloaded the BHZ channel (40 sps vertical component).
0.2 – DATA FILE DIRECTORY STRUCTURE:
We are using SEED volumes for this example, as they require the least amount of additional
work, but you can also use miniSEED volumes. The SEED volumes are organized so as to have
all of the data for one day (24 hours of continuous record) separated by each network. The
SEED volumes are stored in the following directory structure */seed_data/YEAR/MONTH,
where “YEAR” corresponds to the year (e.g. 2016…) the data was recorded and “MONTH”
corresponds to the month (e.g. 1, 2, 3…) the data was recorded (n.b. no year directory should
have more than 12 subdirectories and no month subdirectory should have more than 31 files).
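The layout above can be sketched and sanity-checked from the shell. The base path and years below are illustrative; substitute your own archive location:

```shell
# Build the */seed_data/YEAR/MONTH tree for two example recording years,
# then verify that no year directory holds more than 12 month subdirectories.
base=seed_data                       # illustrative base path
for year in 2010 2011; do
  for month in $(seq 1 12); do
    mkdir -p "$base/$year/$month"
  done
done
for y in "$base"/*/; do
  n=$(find "$y" -mindepth 1 -maxdepth 1 -type d | wc -l)
  [ "$n" -le 12 ] || echo "WARNING: $y has $n month subdirectories"
done
```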
0.3 – DATA FILE FORMATTING:
The files used in this example require the following naming convention
NETWORK.YEAR.JULDAY.MONTH.DAY.0.seed, where “NETWORK” is the network code
(e.g. ZG…) for the network, where “YEAR” corresponds to the year (e.g. 2011…) the data was
recorded, “JULDAY” corresponds to the Julian day (e.g. 123…) the data was recorded,
“MONTH” corresponds to the month (e.g. 5…) the data was recorded, and “DAY”
corresponds to the day (e.g. 3…) the data was recorded (e.g. ZG.2011.123.5.3.0.seed).
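As a quick sketch, the Julian day and the full file name can be derived with GNU date (the variable values are taken from the example above; %-j is a GNU extension that prints the day of year without leading zeros):

```shell
# Derive the Julian day and assemble NETWORK.YEAR.JULDAY.MONTH.DAY.0.seed.
network=ZG; year=2011; month=5; day=3
julday=$(date -d "$year-$month-$day" +%-j)   # GNU date
fname="$network.$year.$julday.$month.$day.0.seed"
echo "$fname"   # ZG.2011.123.5.3.0.seed
```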
1.0 AMBIENT NOISE CROSS CORRELATIONS
1.1 – EXTRACT SAC FILES FROM THE SEED VOLUMES AND FILTER THEM:
There are two main steps in this section. The first involves running the script
RUN_KMW_XX_1.csh for each network, in this case ZG and GT. In this example we have the
directory structures ZG_EXAMPLE and GT_EXAMPLE. We will run RUN_KMW_ZG_1.csh
and RUN_KMW_GT_1.csh in the respective network directories. Each script calls the tcsh
script cv_do_CO_XX.csh.
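A dry-run sketch of this step (directory and script names follow the example above; drop the echo wrapper to actually execute the scripts):

```shell
# Print the command that would be run in each network directory.
for net in ZG GT; do
  echo "(cd ${net}_EXAMPLE && csh RUN_KMW_${net}_1.csh)"
done > step1_commands.txt
cat step1_commands.txt
```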
1.2 – CALCULATE CROSS-CORRELATIONS AND STACK THEM:
The second step calculates daily cross-correlations and stacks them into one cross-correlation
time series. First we must create a directory with the filtered files from the previous step
merged into one directory. We will copy all of the files from each year directory of each
network directory into one new directory called ALL_EXAMPLE. This new directory will have
the same directory structure as the individual network directories but will contain all of the data
from both networks. In this directory we will run the script RUN_KMW_ALL_2.csh to create
all of the cross-correlations and stack them into one cross-correlation function for each station
pair.
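A toy sketch of the merge (the touched file names are illustrative stand-ins for the filtered SAC files; the directory names follow the example):

```shell
# Build two tiny network trees, then copy both into ALL_EXAMPLE so it
# mirrors the shared year/month layout with data from both networks.
mkdir -p ZG_EXAMPLE/2011/5 GT_EXAMPLE/2011/5
touch ZG_EXAMPLE/2011/5/ft_ZG.STA1.LHZ.SAC
touch GT_EXAMPLE/2011/5/ft_GT.LPAZ.BHZ.SAC
mkdir -p ALL_EXAMPLE
for net in ZG_EXAMPLE GT_EXAMPLE; do
  cp -r "$net"/. ALL_EXAMPLE/   # preserves the year/month structure
done
ls ALL_EXAMPLE/2011/5
```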
1.3 – CODES REFERENCED IN THIS STEP
cv_do_CO.csh program (tcsh script)
==================================
Script cv_do_CO.csh is the main procedure, which organizes the work spaces and the sequential run of five
programs compiled with gcc/g77: cv_sa_from_seed_holes_RESP, cut_trans_RESP, filter4,
whiten_rej_phamp, and justCOR_mv_dir.
cv_do_CO.csh is called from the shell command line as:
...> cv_do_CO.csh year b_month e_month
where,
year - processing year;
b_month - starting month, 1 <= b_month <= 12;
e_month - ending month, 1 <= e_month <= 12, b_month <= e_month.
When cv_do_CO.csh starts, it immediately creates the workspace work1 and the working directory
year/b_month for the program cv_sa_from_seed_holes_RESP. The script also prepares the input data for
cv_sa_from_seed_holes_RESP: it creates the year/{b_month}_1 directory and copies the data of the
first month b_month from the archive into it, creates the event list (references to daily SEED volume
files) in the working directory year/b_month, extracts station information from the monthly data into
the shortcut station list file station.lst (stored under both the year/b_month and year/{b_month}_1
directories), and starts the cv_sa_from_seed_holes_RESP program.

K.M.W & J.R.D, Tuesday, January 17, 2016
cv_sa_from_seed_holes_RESP program
==================================
cv_sa_from_seed_holes_RESP produces from SEED volumes daily waveform segments (events) for
all given days and stations as a set of binary SAC files (one SAC file per day and station), and
retrieves the corresponding response information (ASCII files) in an evalresp-compatible format.
cv_sa_from_seed_holes_RESP provides the following additional functionality:
- merges multiple waveform segments into a single waveform segment
- analyses possible data gaps and rejects data whose total gaps exceed a
threshold value
- linearly interpolates gaps
- creates and stores on disk an auxiliary Reference Table (RT) that keeps
event/station/waveform parameters and references to waveform
locations on disk
- analyses and stores in the RT, for later correction, a possible fractional
time offset that arises when the sample epoch time is not a multiple of the sampling step
Usage
=====
...> cv_sa_from_seed_holes_RESP LHZ threshold_gap
where
LHZ - channel name
threshold_gap - threshold_gap*100 is the maximum allowed data gap in waveforms, in percent.
Recommended value: 0.1
cv_sa_from_seed_holes_RESP must be started in the year/b_month directory.
Input data
==========
a) File with the fixed name station.lst. The station.lst file is a plain ASCII file given in a tabular form.
The file includes shortcut information about the seismic stations. Each line consists of four fields separated
by one or more spaces: network, sta, lon and lat.
Here,
network - two character name of a seismic network
sta - seismic station name, up to 6 characters
lon - geographic longitude in degrees
lat - geographic latitude in degrees
Example:
CI TUQ -115.923900 35.435800
CI VES -119.084700 35.840900
IU TUC -110.784700 32.309800
TA 109C -117.105100 32.888900
TA A04A -122.707000 48.719700
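Because each line must have exactly four fields, it can help to validate a station.lst before running. A sketch, using rows from the example above:

```shell
# Write a small station.lst and check that every line has four fields
# (network, sta, lon, lat).
cat > station.lst <<'EOF'
CI TUQ -115.923900 35.435800
CI VES -119.084700 35.840900
IU TUC -110.784700 32.309800
EOF
awk 'NF != 4 { printf "line %d: expected 4 fields, got %d\n", NR, NF; bad = 1 }
     END { exit bad }' station.lst && echo "station.lst OK"
```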
b) File with the fixed name input_ev_seed. The input_ev_seed file is a plain ASCII file given in a
tabular form. The file includes information about the location of a SEED volume file for a given year,
month and day.
The location of a SEED volume is described by two sequential lines.
Line 1: ind, year, month, day, comments
ind - fixed text string " PDE". Note the leading blank: the first character of the line must be a space.
year - four digits year
month - number of the month of the year
day - number of the day of the month
comments - any text
All fields must be separated by one or more spaces.
Line 2: path
path - path to the SEED volume file. Text must start from the first
position in a line.
Example:
 PDE 2005 4 2 0000000000000 63.52 -147.44 11 8.50 Ms GS 9C.G F.S.
../4_in/ALL_2005_4_2
 PDE 2005 4 3 0000000000000 63.52 -147.44 11 8.50 Ms GS 9C.G F.S.
../4_in/ALL_2005_4_3
 PDE 2005 4 4 0000000000000 63.52 -147.44 11 8.50 Ms GS 9C.G F.S.
../4_in/ALL_2005_4_4
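The two-line records can be generated programmatically. A sketch, with paths and dates following the example (note the mandatory leading space before "PDE"):

```shell
# Emit one " PDE"/path record pair per day into input_ev_seed.
year=2005; month=4
for day in 2 3 4; do
  printf ' PDE %d %d %d 0000000000000\n' "$year" "$month" "$day"
  printf '../4_in/ALL_%d_%d_%d\n' "$year" "$month" "$day"
done > input_ev_seed
head -2 input_ev_seed
```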
Output Data
==========
Waveform directories. For each day cv_sa_from_seed_holes_RESP creates, in the working directory,
a subdirectory of the form yyyy_M_D_0_0_0 to store one day of waveforms in binary SAC format (like
CI.GRA.LHZ.SAC) and the corresponding instrument response files (like RESP.CI.GRA..LHZ). yyyy, M
and D are the year, month and day of all data stored in the yyyy_M_D_0_0_0 directory.
File sac_db.out. The file sac_db.out is a binary dump of the final state of the auxiliary RT on disk.
File event_station.tbl. The file event_station.tbl is an ASCII dump of some fields of the auxiliary
RT. Each record contains the date, the station name, the path to the SAC file, the complete waveform
start time t0 with the possible global time shift frac in seconds, and the number of samples.
Example.
2005_4_30_0_0_0 Y22C 2005_4_30_0_0_0/TA.Y22C.LHZ.SAC t0: 2005/120:0:0:0 frac: 0 s 86401 s of
record
2005_4_30_0_0_0 BOZ 2005_4_30_0_0_0/US.BOZ.LHZ.SAC t0: 2005/120:0:0:1 frac: -0.171 s 86400 s
of record
2005_4_30_0_0_0 BW06 2005_4_30_0_0_0/US.BW06.LHZ.SAC t0: 2005/120:0:0:1 frac: -0.164 s 86400
s of record
cut_trans_RESP program
======================
When cv_sa_from_seed_holes_RESP is finished, the cv_do_CO.csh shell script starts the next program,
cut_trans_RESP, in the same working directory. The cut_trans_RESP program removes the mean and the
trend from the waveforms, corrects the waveforms for the instrument response via the SAC
evalresp function, applies broadband filtering, and cuts the desired segment of data in the given time range.
cut_trans_RESP writes the global time shift from the auxiliary RT into the header of each SAC output file,
field user1.
Usage
=====
...> cut_trans_RESP T1 T2 T3 T4 t1 npts
where,
T1 T2 T3 T4 - corner periods of the broadband pass filter in seconds. Corner periods are real
numbers, and T1 > T2 > T3 > T4 > 0. The corresponding corner frequencies are 1/T1, 1/T2, 1/T3, 1/T4.
t1 - skip points from the beginning of the waveform up to time t1, where t1 is the time in seconds
from the beginning of the day. t1 is a non-negative integer (t1 >= 0).
npts - keep npts seconds of waveform after skipping. npts is a positive integer (npts > 0).
Example
...> cut_trans_RESP 170.0 150.0 5.0 4.0 1000 83000
Input/Output data
=================
cut_trans_RESP uses input data from the daily directories that were created by the previous program
cv_sa_from_seed_holes_RESP and stores output in the same directories, but under a different name:
the program adds the prefix ft_ to each waveform name. For example, if the file name was
AZ.MONP.LHZ.SAC, after processing it is stored under the name ft_AZ.MONP.LHZ.SAC.
cut_trans_RESP also uses the auxiliary RT (read only).
filter4 and whiten_rej_phamp programs
=====================================
The filter4 and whiten_rej_phamp programs apply a set of data processing (filtering) procedures to all
individual binary SAC waveforms with names ft_xxxxx that were output by the cut_trans_RESP
program. Note that filter4, whiten_rej_phamp and justCOR_mv_dir run in a different work
space, work2, with the new working directory year/b_month/5to150. To do this, the main script
cv_do_CO.csh creates the new subdirectory 5to150 in the current working directory year/b_month and
starts filter4, whiten_rej_phamp and justCOR_mv_dir in the new working directory
year/b_month/5to150. The script copies station.lst and the sac_db.out table into the new working
directory, and runs a loop over days. In each loop cycle the script creates a daily waveform directory
yyyy_M_D_0_0_0 and fills it with the ft_xxxx.SAC waveforms from work space work1. After that the
script goes into the yyyy_M_D_0_0_0 directory, runs filter4, runs whiten_rej_phamp, and returns to
the working directory.
Let us describe how to run filter4 and whiten_rej_phamp programs.
a) The program filter4 applies a broadband filter and the global time
shift correction to SAC waveform files, updating them in place.
Usage
=====
...> filter4 parameter_file
The parameter_file is a plain ASCII file; each line includes the following fields, separated by one or
more spaces: T1, T2, T3, T4, dt, npow, name_file
T1,T2,T3,T4 - corner periods in seconds, T1>T2>T3>T4>0, real
dt - sampling step in seconds, real
npow - power of cosine ends tapering, integer
name_file - the name of the input/output binary SAC waveform file in the working directory.
Input/Output
============
For each line of the parameter_file, filter4 reads the file name_file (ft_xxxx.SAC), filters it
according to the line parameters, applies the global time shift correction, and stores the result as a
binary SAC file, replacing the input file with the new one.
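A sketch of how such a parameter_file might be generated for a daily directory. The corner periods, dt, and npow values below are illustrative, and the touched files stand in for real waveforms:

```shell
# One parameter line per ft_*.SAC waveform: T1 T2 T3 T4 dt npow name_file.
touch ft_AZ.MONP.LHZ.SAC ft_TA.N06A.LHZ.SAC   # stand-ins for real waveforms
for f in ft_*.SAC; do
  echo "160.0 130.0 7.0 5.0 1.0 1 $f"
done > param_filter4.txt
cat param_filter4.txt
```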
b) The program whiten_rej_phamp applies three data processing procedures
to every individual file produced by the filter4 program:
- temporal normalization or one-bit normalization
- spectral whitening
- notch correction for 26 sec period
Usage
=====
...> whiten_rej_phamp parameter_file
The parameter_file is a plain ASCII file; each line includes the following fields, separated by one or
more spaces:
T1,T2,T3,T4,dt,npow,nwt,tnorm,fr1,fr2,nsmooth,onebit,notch,freqmin,name_file
where,
T1,T2,T3,T4 - corner periods in seconds, T1>T2>T3>T4>0
dt - sampling step in seconds
npow - power of cosine ends tapering, integer
nwt - half width of smoothing window in samples, integer
tnorm - one character "Y" or "N", to apply or not apply temporal normalization
fr1, fr2 - corner frequencies of temporal normalization in Hz, used if tnorm = Y
nsmooth - half width of the smoothing window of temporal normalization in samples, integer, used
if tnorm = Y
onebit - one character "Y" or "N", to apply or not apply one-bit normalization. We don't
recommend using one-bit normalization.
notch - one character "Y" or "N", to apply or not apply the notch correction for the 26 sec period. Be
sure that this effect exists in your area of investigation.
freqmin - spectral amplitude damping for the notch correction,
0 < freqmin < 1.0. A smaller value of freqmin means stronger damping. freqmin = 0.5 is the
recommended value.
name_file - the name of the input binary SAC waveform file in the working directory.
Input/Output
============
For each line, whiten_rej_phamp reads name_file (ft_xxxx.SAC), performs the data processing, and
stores the temporally normalized waveform under the name of the input file. It also creates and stores
the whitened spectra as binary SAC files of time-series type: the amplitude spectrum is named
ft_xxxx.SAC.am and the phase spectrum ft_xxxx.SAC.ph.
Example.
Input file: ft_TA.N06A.LHZ.SAC
Output files: ft_TA.N06A.LHZ.SAC ft_TA.N06A.LHZ.SAC.am
ft_TA.N06A.LHZ.SAC.ph
my_stack program
======================
my_stack is the last program to run. The program reads the auxiliary RT, creates station pairs, computes
daily cross-correlations for each station pair, and finally stacks them over the month to
produce cross-correlation waveform files in binary SAC format. The resulting cross-correlation
database is stored under the subdirectory COR.
2.0 AUTOMATIC FREQUENCY TIME ANALYSIS (AFTAN)
Rename Cross-Correlation files
Run in folder: stack
Run script: rename_cors.bash
# This script renames all cross-correlations
# to work with other codes moving forward
#
# OUTPUT: removes the network code and renames files to "COR_STN1_STN2.SAC"
# WARNING: if multiple stations share the same name, files will be overwritten!
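A toy sed sketch of the renaming; it mirrors the intent of rename_cors.bash rather than its actual implementation, and the file name is illustrative:

```shell
# Strip the NET. prefix from both station fields of a COR file name,
# turning COR_NET1.STN1_NET2.STN2.SAC into COR_STN1_STN2.SAC.
touch COR_ZG.STA1_GT.LPAZ.SAC
for f in COR_*.SAC; do
  mv "$f" "$(echo "$f" | sed -E 's/([A-Z0-9]+)\.([A-Z0-9]+)/\2/g')"
done
ls COR_*.SAC
```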
Stack acausal and causal parts of waveform to produce symmetric component
Code: yangch.c
Source Code Location: MAKE_SYM_WAVEFORM
Run in folder: stack
Usage: ls *.SAC > filelist; yangch
Splits COR_*.SAC files into positive (COR_*.SAC_p) and negative (COR_*.SAC_n) time lags, and
averages them together to create symmetric (COR_*.SAC_s) time series.
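Conceptually, the symmetric component is a point-by-point average of the positive and negative lags. A toy numeric sketch (yangch does this on binary SAC data; the sample values below are made up):

```shell
printf '1.0\n2.0\n3.0\n' > pos.txt   # positive-lag samples (toy values)
printf '3.0\n2.0\n1.0\n' > neg.txt   # negative-lag samples, time-reversed
paste pos.txt neg.txt | awk '{ printf "%.1f\n", ($1 + $2) / 2 }' > sym.txt
cat sym.txt
```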
Compute signal to noise ratio for symmetric (COR*.SAC_s) cross-correlations at various
frequencies (uneven period distribution)
Code: spectral_snr_f_V2_lmw.c
Source Code Location: SPECTRAL_SNR/lin_rms
Run in folder: stack
Usage: ls *_s > 5to150_file_s.dat; spectral_snr_f_V2_lmw 5to150_file_s.dat
Output: COR_*.SAC_s_snr.txt (text files of the SNR at different periods). This also outputs empty
COR_*.SAC_p_snr.txt and COR_*.SAC_n_snr.txt files, which can be deleted.
Calculate SNR ratios at periods of interest
Code: find_rms_per_v2.f
Source Code Location: SPECTRAL_SNR
Run in folder: stack
Usage: csh cal_SNR_all.csh
Outputs: 5to150_SNR_XX.dat, where XX is the period of interest defined in the .csh script. The columns
in the output are the cross-correlation file name and the SNR of the positive time lag, negative time lag,
and symmetric component (the second and third columns should equal zero).
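Given that column layout, a quick awk filter can list the pairs whose symmetric-component SNR exceeds a cutoff. A sketch with made-up values; the real cut is applied later by choose_disp_s_ph.f:

```shell
# Keep file names whose 4th column (symmetric-component SNR) exceeds 10.
cat > 5to150_SNR_08.dat <<'EOF'
COR_STA1_STA2.SAC_s 0 0 15.2
COR_STA1_STA3.SAC_s 0 0 6.8
EOF
awk '$4 > 10 { print $1 }' 5to150_SNR_08.dat
```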
Run FTAN analysis on symmetric cross-correlations
Code: aftani_c_pgl_test
Source Code Location: FTA
Run in folder: stack
Usage:
awk '{print "-1 1.5 5 5 100 20 1 0.5 0.2 2",$1}' 5to150_file_s.dat > FTA_para.dat; aftani_c_pgl_test
FTA_para.dat
Or you can run the file make_FTA_para.dat.csh, which does the same thing as the awk command.
Necessary files: ref_avg_ph.dat: reference dispersion curve to ensure FTAN analysis chooses
fundamental mode
Outputs: COR*_1_DISP.1 (“unclean” FTAN results) and COR*_2_DISP.1 (“clean” FTAN results). See
Bensen et al. (2007) for the difference between clean and unclean FTAN results. We generally use the clean
results (COR*_2_DISP.1).
Get all files and phase velocities with a SNR > 10
Code: choose_disp_s_ph.f
Source Code Location: GET_GOOD_SNR
Run in folder: stack
Usage: csh do_choose.csh
Outputs: This code reads the COR*_s_2_DISP.1 files to get phase velocities for the SNR threshold defined in
the Fortran code. It outputs files called tomo_input_XX.dat, where XX is the period of interest.
Remove obvious phase velocity outliers
Script: sort_tomo.bash
Run in folder: stack
Usage: bash sort_tomo.bash
Outputs: This script removes obvious outliers in phase velocity (all phase velocities <1.5
and >5.0 km/s) and outputs new tomo_input_XX.dat_good files.
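The velocity cut can be sketched with awk (toy rows; the phase velocity is assumed here to be the last field of each line, so check sort_tomo.bash for the actual column layout):

```shell
# Keep only rows whose last field lies strictly between 1.5 and 5.0 km/s.
cat > tomo_input_08.dat <<'EOF'
-16.25 -68.13 -17.50 -69.50 3.42
-16.25 -68.13 -18.00 -70.00 0.90
-16.25 -68.13 -18.50 -70.50 5.70
EOF
awk '$NF > 1.5 && $NF < 5.0' tomo_input_08.dat > tomo_input_08.dat_good
cat tomo_input_08.dat_good
```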
Make tomography formatted txt files
Code: tomo_input.f
Source Code Location: MAKE_TOMO_INPUT
Run in folder: stack
Usage: bash do_tomo_input.bash
Output: tomo_phvel_SNR_YY_XX.dat files where YY is the signal to noise ratio used in the
choose_disp_s_ph code and XX is the period of interest. These can be put into the Barmin
tomographic inversion.
3.0 2-D PHASE VELOCITY TOMOGRAPHIC INVERSION
Source Code Location: tomo/itomo_ra_shn
• Copy the directory “tomo” from the “my_src” directory
• cd into tomo and create the directory “data”
• Run in terminal: cd ..; cp stack/tomo_phvel_* tomo/data/
From now on, we will run out of the tomo directory.
To compile the code, run the Makefile in the itomo_ra_shn folder.
Copy the output itomo_ra_sp_cu_shn to the tomo directory.
Files to edit: contour.ctr
This file contains information about the model space for the inversion. It is important to follow the
“snake” pattern below for the coordinate locations:
0. 0.    # point outside model space
4        # of endpoints
18. 31.  # coordinates of endpoints
47. 31.
47. 45.
18. 45.
4        # of endpoints
1 2
2 3
3 4
4 1
Running tomographic inversion:
Script: run_inversion.csh
Run in folder: tomo
File necessary to run script:
contour.ctr
model_map.ctr
rm_resid3_wavelength.bash
Brief Description: this script runs the tomographic inversion. It is broken up into 3 different parts:
1) Inversion of all paths
2) rm_resid3_wavelength.bash
a. This script removes residual outliers and cuts out paths that are shorter than the
wavelength criteria
3) Inversion of cleaned paths
In the run_inversion.csh script, you will need to set up your model region by specifying the latitude range,
longitude range, and grid spacing (see the modifiable parameters below). Other specifications can be
modified as well, but the values in the script provided are well suited for high-resolution ANT
studies.
Output: Folders for each period and each “alpha_sigma_beta” set with inversion results.
*.1     File containing phase velocity results
*.1_%_  Phase velocity perturbations relative to a constant starting model
*.azi   Estimates (by 2 different methods) of azimuthal coverage
*.prot  Inversion parameter file
*.rea   Resolution analysis results
*.res   Path density estimates
*.resid Residual information
The tomographic inversion involves three inputs.
1) path file (output from do_tomo_input.bash)
2) Output name
3) Period
And runs as follows in the run_inversion.csh script:
./itomo_ra_sp_cu_shn path_file output_name period
Below are modifiable parameters:
A slightly newer inversion code is available from http://ciei.colorado.edu/Products/. We provide the
manual for this newer code, which better describes the outputs of the tomographic inversion, in the
tomo_code_src folder (tomo_1.1.pdf).
Plotting Results Scripts:
Make directory plot_maps in the tomo directory
Script: run_plot_region.bash
Run in folder: plot_maps
Other scripts/codes necessary:
1) plot_region.csh: GMT script for plotting results
2) gauss_filter: Gaussian smoother for resolution analysis
a. source code in tomo_code_src/plot_maps/GAUS_FILT
run_plot_region.bash is a looping script that makes maps by running plot_region.csh.
Things to change:
In run_plot_region.bash
• Make sure alpha, beta, sigma match the inversion parameters in previous step
• Make sure “name” matches output name of inversion
In plot_region.csh
This file is a relatively bare-bones plotting script for producing first-order figures of the inversion results.
At the very least, you need to change the RANGE variable to your region of interest.
For a description of various GMT commands and their syntax, visit:
http://gmt.soest.hawaii.edu/doc/latest/