
Explosions and Shock Waves
10 February 2003
Astronomy G9001 - Spring 2003
Prof. Mordecai-Mark Mac Low
Shock Waves in the ISM
• supernova remnants
• stellar wind bubbles
• H II regions
• stellar jets
• spiral arms
• accretion flows
• thermal instability
• point blast waves
Point Explosions
• Sedov’s dimensional analysis
– relevant physical variables are E, ρ0, R, t
– combine to find a dimensionless constant
  [ρ0] = M L^-3,  [E] = M L^2 T^-2,  [R] = L,  [t] = T
Cancelling M: [E/ρ0] = L^5 T^-2, so ρ0 R^5 / (E t^2) is a dimensionless number C, giving
  R = (C E/ρ0)^1/5 t^2/5
Detailed work gives C ≈ 2.02, i.e. R ≈ 1.15 (E t^2/ρ0)^1/5 for γ = 5/3.
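Plugging in typical supernova-remnant numbers gives a feel for the scale; a minimal Python sketch (E = 10^51 erg and n = 1 cm^-3 are assumed, illustrative values only):

  import numpy as np

  E    = 1e51                    # explosion energy [erg], assumed typical SN value
  n    = 1.0                     # ambient number density [cm^-3], assumed
  rho0 = 1.4 * 1.67e-24 * n      # ambient mass density [g cm^-3], mean mass ~1.4 m_H
  C    = 2.02                    # dimensionless Sedov constant from the slide
  pc   = 3.086e18                # cm per parsec
  yr   = 3.156e7                 # s per year

  for t_yr in (1e3, 1e4, 1e5):
      t = t_yr * yr
      R = (C * E / rho0)**0.2 * t**0.4   # R = (C E / rho0)^(1/5) t^(2/5)
      print(f"t = {t_yr:.0e} yr  ->  R = {R/pc:.1f} pc")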
Blast Wave Equation of Motion
Begin with the shell momentum conservation equation (Cioffi, McKee & Bertschinger 1988):
  d/dt (M v2) = 4π R^2 P,
where P = E_SN / V,  M = ρ0 V,  V = (4π/3) R^3,
and the post-shock velocity v2 = [2/(γ+1)] dR/dt. Then
  d/dt [ 8π/(3(γ+1)) ρ0 R^3 dR/dt ] = 3 E_SN R^-1.
If we assume self-similarity, so that R ∝ t^η, setting exponents of t on both sides equal gives
  3η + (η − 1) − 1 = −η,  or  η = 2/5.
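A short Python check that the shell equation above really sustains t^2/5 growth: it integrates the equation forward in code units (E_SN = ρ0 = 1, illustrative only), starting from the power-law solution. Note the coefficient that falls out of this approximate thin-shell equation differs somewhat from the exact Sedov value; the 2/5 exponent is the point.

  import numpy as np

  gamma, E, rho0 = 5.0/3.0, 1.0, 1.0          # code units; illustrative values only
  A  = 8.0 * np.pi * rho0 / (3.0 * (gamma + 1.0))
  xi = (75.0 * E / (6.0 * A))**0.2            # coefficient that makes R = xi t^(2/5) solve the shell equation

  t, t_end, dt = 1e-3, 1.0, 1e-5
  R = xi * t**0.4                             # start on R = xi t^(2/5)
  p = A * R**3 * (0.4 * R / t)                # shell "momentum" A R^3 dR/dt

  while t < t_end:
      p += 3.0 * E / R * dt                   # d/dt (A R^3 dR/dt) = 3 E_SN / R
      R += p / (A * R**3) * dt
      t += dt

  print(f"R(t_end) = {R:.3f}, expected xi*t_end^0.4 = {xi * t_end**0.4:.3f}  (forward Euler, small drift expected)")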
Similarity Solutions
• The interior structure of an adiabatic blast wave from a point explosion has no intrinsic scale.
• Specification of the time or radius determines the rest of the structure.
• Non-dimensionalization of the equations of gas flow allows derivation.
(figure: Ryu & Vishniac 1987)
Dynamical Overstability
(diagram: shell perturbation with ram pressure from swept-up gas outside and thermal pressure from the hot interior; Vishniac 1983)
Development and Saturation
• Vishniac overstability occurs (Ryu & Vishniac 1987, 1988)
  – for γ < 1.2 in blast waves
  – for γ < 1.1 in shells
• Growth ∝ t^1/2
• Saturates when flows in shell become supersonic (ML & Norman 1992)
• Transonic turbulence in shell
Experimental Verification
• Laser vaporization of foam target
• Nitrogen has γ=1.4 (adiabatic diatomic gas)
• Xenon has many lines, so can radiatively cool with effective γ ~ 1.05
(images: N2 vs. Xe blast-wave experiments; Grun et al. 1991, PRL)
Nonlinear Thin Shell Instability
(diagram: slab bounded by two shocks, confined by ram pressure on both sides; Vishniac 1994)
• Nonlinear instability
• Displacements must be as large as layer thickness
• Occurs in shock-bounded layer if thin
Nonlinear Development
• Leads to steadily thickening turbulent layer (Blondin & Marks 1996)
Explosions in a Stratified Medium
• Explosions in an exponentially stratified medium formally reach infinite velocity in finite time (Kompaneets 1959)
• Explosions in a medium with ρ ~ 1/r^2 expand at constant rate; in steeper power laws they accelerate, in shallower ones they decelerate (see Ostriker & McKee 1988, Koo & McKee 1990)
Rayleigh-Taylor instabilities in shells
(diagram: shells with velocity v and effective gravity g; decelerating shells are Rayleigh-Taylor stable, accelerating shells are unstable)
Supernova Remnants
• developmental stages:
  – free expansion:  R ∝ t
  – Sedov solution (adiabatic):  R ∝ t^2/5
  – pressure-driven snowplow (cooled shell):  R ∝ (t − t0)^3/10, not R ∝ t^2/7
  – momentum-driven snowplow (cooled interior):  R ∝ t^1/4
• numerical solution (Cioffi, McKee & Bertschinger 1988)
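A sketch of how these exponents chain together into one R(t) curve; the transition times below are arbitrary placeholders chosen for illustration (not values from the lecture), and the pressure-driven snowplow is approximated as a plain power law.

  import numpy as np

  # stage name, power-law exponent (from the list above), assumed end time [yr]
  stages = [("free expansion",            1.00, 1e2),
            ("Sedov",                     0.40, 2e4),
            ("pressure-driven snowplow",  0.30, 3e5),   # simple t^0.3 stand-in for (t - t0)^(3/10)
            ("momentum-driven snowplow",  0.25, 1e6)]

  def radius(t_yr, R0=1.0):
      """Piecewise power-law R(t), matched continuously at each assumed transition."""
      R, t_prev = R0, 1.0                 # start at t = 1 yr with radius R0 (arbitrary units)
      for name, alpha, t_end in stages:
          t_stop = min(t_yr, t_end)
          if t_stop <= t_prev:
              break
          R *= (t_stop / t_prev)**alpha
          t_prev = t_end
      return R

  for t in (1e3, 1e5, 1e6):
      print(f"t = {t:.0e} yr  ->  R/R0 = {radius(t):.1f}")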
Stellar Wind Bubbles
• Double shock structure, separated by a contact discontinuity
• Outer shell quickly cools; interior only cools if very dense
• Hot interior = pressure driven (R ~ t^3/5)
• Cold interior = momentum driven (R ~ t^1/2)
Stellar Wind Bubbles
• Similarity solution for shell structure (Castor, McCray & Weaver 1975; Weaver et al. 1977)
• Interior dominated by conductive evaporation:
  T = Tc (1 − r/R)^2/5
  ρ = ρc (1 − r/R)^−2/5
(figure: T and ρ profiles from center to shell)
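Evaluating these interior profiles takes one line each; a small Python sketch with the central values Tc and ρc normalized to 1 (purely illustrative):

  import numpy as np

  # Conduction-dominated interior profiles from the slide:
  #   T/Tc = (1 - r/R)^(2/5),   rho/rho_c = (1 - r/R)^(-2/5)
  x   = np.linspace(0.0, 0.95, 6)          # r/R from the center toward the shell
  T   = (1.0 - x)**0.4
  rho = (1.0 - x)**(-0.4)
  for frac, Ti, di in zip(x, T, rho):
      print(f"r/R = {frac:.2f}   T/Tc = {Ti:.2f}   rho/rho_c = {di:.2f}")

Toward the shell (r/R → 1), T drops to zero and ρ rises: the evaporated mass piles up just inside the cold swept-up shell.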
Hot ISM
• To explain the observed hot medium, consider the filling factor of supernova remnants
• Cox & Smith (1975), McKee & Ostriker (1977)
• How to compute expansion of SNRs in a clumpy, inhomogeneous medium?
• MO77 assumed dense, round clouds embedded in low-density intercloud gas
• SNRs expand quickly through the low-density gas, so they found very high filling factors.
Multiple Supernovae: Superbubbles
• Most Type II SNe from massive stars occur in OB associations
• Later SNe occur within earlier SNRs
• Later blast waves quickly decelerate to the sound speed of the hot interior, maintaining pressure (ML & McCray 1988)
• Multiple SNR expands like a wind-driven bubble with L_mech = N_SN E_SN / Δt_SN
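For orientation, a Python sketch of L_mech for an assumed OB association (the association size, lifetime, and ambient density are illustrative assumptions, not lecture values), together with the wind-bubble scaling R ∝ (L t^3/ρ0)^1/5 it drives; the 0.76 coefficient is the standard Weaver et al. (1977) value.

  N_SN   = 100                  # number of SNe in the association (assumed)
  E_SN   = 1e51                 # energy per SN [erg]
  dt_SN  = 4e7 * 3.156e7        # time over which they explode, ~40 Myr in s (assumed)
  L_mech = N_SN * E_SN / dt_SN  # equivalent steady mechanical luminosity [erg/s]
  print(f"L_mech = {L_mech:.2e} erg/s")

  rho0 = 1.4 * 1.67e-24         # ambient density for n = 1 cm^-3 (assumed)
  pc, Myr = 3.086e18, 3.156e13
  for t_Myr in (1, 10, 30):
      t = t_Myr * Myr
      R = 0.76 * (L_mech * t**3 / rho0)**0.2
      print(f"t = {t_Myr:3d} Myr  ->  R = {R/pc:.0f} pc")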
Multiple Supernovae: Turbulence
ML, Avillez, Balsara, Kim 2001, astro-ph
• Computational model of disk
  – 1 × 1 × 20 kpc box
  – SN driving
  – vertical stratification
  – 1.25 pc resolution
  – radiative cooling
Galactic Fountain
• Hot gas in plane must rise
• Shapiro & Field (1976) computed consequences
  – Gas at 10^6 K allowed to radiatively cool
  – Incorporating self-photoionization gives good match to column densities of C IV, N V, and O VI (Shapiro & Benjamin 1991)
• Result of cooling shown numerically to be falling, dense clouds (Avillez 2000)
Starburst winds
• With sufficiently high star formation (and SN) rate, hot gas entirely escapes the potential
• X-ray and Hα emission observed many kpc away from starburst galaxies
• Winds may energize, pollute nearby IGM, but can't sweep away the rest of the ISM (ML & Ferrara 1999, Fujita et al. in prep)
  – winds accelerate down steepest density gradients
  – far more energy required to sweep the ISM than just the gravitational binding energy suggested by Dekel & Silk (1986)
Where to go next?
• Current plan is to devote one more week to
ZEUS-3D, examining MHD problems
• Then spend similar amounts of time on
– Flashcode (AMR, Riemann solver, MPI)
– GADGET (SPH, self-gravity methods, MPI)
– Cloudy (photoionization computations)
• Alternatives:
– spend more time on ZEUS-3D
– study ZEUS-MP as an example of an MPI code
instead of one of the other codes (based on
ZEUS-3D so some things familiar)
Multidimensional Computations
• Directional splitting
– sweep order permuted: XYZ, XZY, YZX, ... (see the sketch below)
• Centering
– velocities are face centered, not edge centered
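A toy Python sketch of directional splitting: apply one-dimensional updates along each axis, permuting the sweep order from step to step. The 1D "update" here is just a placeholder shift standing in for a real hydro sweep.

  import numpy as np
  from itertools import cycle

  def sweep_1d(q, axis):
      """Placeholder 1D update along one axis (here just a periodic shift)."""
      return np.roll(q, 1, axis=axis)

  q = np.random.rand(16, 16, 16)                        # some 3D field
  orders = cycle([(0, 1, 2), (0, 2, 1), (1, 2, 0)])     # XYZ, XZY, YZX, ...

  for _, order in zip(range(6), orders):
      for axis in order:                                # directional splitting: one axis at a time
          q = sweep_1d(q, axis)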
Different coordinate systems
• Non-cartesian rectilinear coordinate systems in ZEUS: all difference equations in covariant form.
  g_ij = diag( h1^2, h2^2, h3^2 ),   ∇_i f = (1/h_i) ∂f/∂x_i
  (h1, h2, h3) = (1, 1, 1)        for (x1, x2, x3) = (x, y, z)
  (h1, h2, h3) = (1, 1, r)        for (x1, x2, x3) = (z, r, φ)
  (h1, h2, h3) = (1, r, r sin θ)  for (x1, x2, x3) = (r, θ, φ)
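A small Python sketch tabulating these scale factors and using them to turn coordinate increments into physical lengths; the function and dictionary names are mine, not ZEUS's.

  import numpy as np

  # (h1, h2, h3) as functions of (x1, x2, x3) for each coordinate system
  scale_factors = {
      "cartesian":   lambda x1, x2, x3: (1.0, 1.0, 1.0),            # (x, y, z)
      "cylindrical": lambda x1, x2, x3: (1.0, 1.0, x2),             # (z, r, phi)
      "spherical":   lambda x1, x2, x3: (1.0, x1, x1*np.sin(x2)),   # (r, theta, phi)
  }

  def physical_lengths(system, x, dx):
      """Convert coordinate increments dx_i to physical lengths h_i dx_i."""
      h = scale_factors[system](*x)
      return tuple(hi * dxi for hi, dxi in zip(h, dx))

  # e.g. a spherical zone at r = 2, theta = pi/4, with dr = dtheta = dphi = 0.1
  print(physical_lengths("spherical", (2.0, np.pi/4, 0.0), (0.1, 0.1, 0.1)))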
Ratioed grids
• ZEUS includes ratioed grids (see sample prob).
– add multiple ggen namelists
– set lgrid=.f. until last one, then .t.
– x1rat=1.03 is a typical value
• To compute grid sizes:
  dx_min = (x_max − x_min)(x_rat − 1) / (x_rat^n − 1)
• Best not to exceed 10:1 zone aspect ratios.
  – dx_max = dx_min x_rat^(n−1)
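A quick Python check of those grid formulas, using hypothetical values (100 zones over [0, 1] with x1rat = 1.03); the last line confirms the zone widths sum back to the full domain.

  xmin, xmax, n, xrat = 0.0, 1.0, 100, 1.03     # hypothetical grid parameters

  dxmin = (xmax - xmin) * (xrat - 1.0) / (xrat**n - 1.0)
  dxmax = dxmin * xrat**(n - 1)
  total = dxmin * (xrat**n - 1.0) / (xrat - 1.0)   # geometric series sums back to xmax - xmin

  print(f"dxmin = {dxmin:.5f}, dxmax = {dxmax:.5f}, "
        f"dxmax/dxmin = {dxmax/dxmin:.1f}, total = {total:.5f}")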
Parallelization
• Additional issues:
– How to coordinate multiple processors
– How to minimize communications
• Common types of parallel machines
– shared memory, single program
• eg SGI Origin 2000, dual or quad proc PCs
– multiple memory, multiple program
• eg Beowulf Linux clusters, Cray T3E, ASCI systems
Shared Memory
• Multiple processors share the same memory
• Only one processor can access a memory location at a time
• Synchronization by controlling who reads and writes shared memory
U of Minn Supercomputing Inst.
Shared Memory
• Advantages
  – Easy for user
  – Speed of memory access
• Disadvantages
  – Memory bandwidth limited
  – Increasing the number of processors without increasing bandwidth causes severe bottlenecks
Distributed Memory
• Multiple processors with private memory
• Data shared across network
• User responsible for synchronization
U of Minn Supercomputing Inst.
Distributed Memory
• Advantages
  – Memory scalable with number of processors: more processors, more memory
  – Each processor can read its own memory quickly
• Disadvantages
  – Difficult to map data structure to memory organization
  – User responsible for sending and receiving data among processors
• To minimize overhead, data should be
transferred early and in large chunks.
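As a sketch of what "responsible for sending and receiving data" means in practice, here is a minimal ghost-zone exchange between neighboring ranks written with mpi4py (an illustration, not an excerpt from ZEUS-MP); it assumes a 1D domain decomposition with one ghost zone on each side.

  import numpy as np
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank, size = comm.Get_rank(), comm.Get_size()

  nx = 8                                    # interior zones per processor
  u = np.full(nx + 2, float(rank))          # local array with one ghost zone on each side

  left  = rank - 1 if rank > 0 else None
  right = rank + 1 if rank < size - 1 else None

  # exchange ghost zones; even ranks send first, odd ranks receive first, to avoid deadlock
  for phase in (0, 1):
      if rank % 2 == phase:
          if right is not None: comm.send(u[nx], dest=right)
          if left  is not None: comm.send(u[1],  dest=left)
      else:
          if left  is not None: u[0]      = comm.recv(source=left)
          if right is not None: u[nx + 1] = comm.recv(source=right)

  print(f"rank {rank}: ghost zones = {u[0]}, {u[nx+1]}")

Run with something like mpirun -np 4 python ghost.py; each rank should report its neighbors' values in its ghost zones.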
Methods
• Shared memory
– data parallel
– loop level parallelization
• Implementation
– OpenMP
– Fortran90
– High Performance
Fortran (HPF)
• Examples
– ZEUS-3D
• Distributed memory
– block parallel
– tiled grids
• Implementation
– Message Passing Interface (MPI)
– Parallel Virtual Machine (PVM)
• Examples
– ZEUS-MP
– Flashcode
– GADGET
OpenMP
• Designate inner loops that can be distributed across processors with the DOACROSS directive
• Dependencies between loop instances prevent
parallelization
• Execution of each loop usually depends on
values from neighboring parts of grid.
• ZEUS-3D only parallelizes out to 8-10
processors with OpenMP
Cache Optimization
• Modern processors retrieve 64 bytes or more at
a time from main memory
– However it takes hundreds of cycles
• Cache is small amount of very fast memory on
microprocessor chip
– Retrievals from cache take only a few cycles.
• If successive operations can work on cached
data, speed much higher
– Fastest changing array index should be inner loop,
even if code rearrangement required
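A Python illustration of the loop-ordering point: NumPy arrays are row-major by default, so the last index is the fast, contiguous one. Timings are machine-dependent, but the strided version should be noticeably slower.

  import numpy as np, time

  a = np.random.rand(2000, 2000)

  def sum_rows_inner(a):            # inner loop over the fastest-changing (last) index
      s = 0.0
      for i in range(a.shape[0]):
          s += a[i, :].sum()        # contiguous slice: cache-friendly
      return s

  def sum_cols_inner(a):            # inner loop over the slowest-changing (first) index
      s = 0.0
      for j in range(a.shape[1]):
          s += a[:, j].sum()        # strided slice: a new cache line fetched per element
      return s

  for f in (sum_rows_inner, sum_cols_inner):
      t0 = time.perf_counter(); f(a); dt = time.perf_counter() - t0
      print(f"{f.__name__}: {dt*1e3:.1f} ms")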
Parallel ZEUS-3D
• To run ZEUS-3D in parallel, set the variable
iutask = 1 in setup block, recompile.
– inserts DOACROSS directives
– compiles with parallel flags turned on if OS
supports them.
• Set the number of processors for the job
(usually with an environment variable)
• Run is otherwise similar to serial.
Use of IDL
• Quick and dirty movies (the a=sin(findgen(10000.)) line is just busy-work, a crude pause between frames; hdfrd reads the HDF dump for step i)
for i=1,30 do begin & $
a=sin(findgen(10000.)) & $
hdfrd,f='zhd_'+string(i,form='(i3.3)')+'aa',d=d,x=x & $
plot,x,d[4].dat & end
• Scaling, autoscaling, logscaling 2D arrays
tvscl,alog(d)
tv,bytscl(d,max=dmax,min=dmin)
• Array manipulation, resizing
tvscl,rebin(d,nx,ny,/s) ; nx, ny must be integer multiples of d's dimensions
tvscl,rebin(reform(d[j,*,*]),nx,ny,/s)
More IDL
• plots, contours
plot,x,d[i,*,k],xtitle='Title',psym=-3
oplot,x,d[i+10,*,k]
contour,reform(d[i,*,*]),nlev=10
• slicer3D
dp = ptr_new(alog10(d))
slicer3D,dp
• Subroutines, functions