From PDEs to Information Science and Back

Russel Caflisch
IPAM
Mathematics Department, UCLA
Collaborators & Support
• UCLA
Stan Osher
Hayden Schaeffer
• Oak Ridge National Labs
Cory Hauck
Themes
• Over the last 20 years, there has been a lively
influx of ideas and techniques from analysis and
PDEs into information science
– e.g., image processing
• Rapid progress in information science has
produced wonderful mathematics
– e.g., compressed sensing
• Ideas and techniques from info science are starting
to be used for PDEs
Information Science
From Wikipedia
Information science (or information studies) is an
interdisciplinary field primarily concerned with the
analysis, collection, classification, manipulation,
storage, retrieval, movement, and dissemination of
information.
This talk will focus on the analysis and manipulation of
data in the form of images and signals.
Image Denoising
Image Denoising
• Removing noise from an image
• Example
– From “Image Processing and Analysis: Variational, PDE, Wavelet
and Stochastic Methods” by T. Chan and J. Shen (2005)
Denoising by Wiener Filter
• Noise is random or has rapid oscillations
– So it can be canceled by local averaging
• Describe the original noisy image as a function $u_0(x) : \mathbb{R}^2 \to \mathbb{R}$
  – x is position and $u_0$ is gray scale.
• The Wiener filter transforms $u_0$ to
  $u_W(x) = K * u_0(x) = \int K(y)\, u_0(x - y)\, dy,$
  $K(y) = (2\pi)^{-1} e^{-|y|^2/2}$
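As a concrete illustration (my sketch, not from the talk), this convolution is a one-liner with standard tools; the test image and noise level below are made-up choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic "image": a sharp-edged square plus Gaussian noise.
u0 = np.zeros((128, 128))
u0[40:90, 40:90] = 1.0
u0 += 0.2 * rng.standard_normal(u0.shape)

# Wiener-style smoothing: convolve with a Gaussian kernel K.
# sigma = 1 matches K(y) = (2*pi)^(-1) * exp(-|y|^2 / 2).
u_w = gaussian_filter(u0, sigma=1.0)
print("image std before/after:", u0.std(), u_w.std())
```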
Wiener Filter as a PDE
• The Wiener filter transforms $u_0$ to
  $u_W(x) = K * u_0(x) = \int K(y)\, u_0(x - y)\, dy,$
  $K(y) = (2\pi)^{-1} e^{-|y|^2/2}$
• $K_t(x)$ is the fundamental solution for the heat equation, i.e.,
  $\partial_t K = \nabla^2 K,$
  $K_{t=0}(x) = \delta(x)$
• Therefore $u_W(x) = u(x, t = 1/2)$, in which
  $\partial_t u = \nabla^2 u,$
  $u = u_0$ at $t = 0$
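A quick numerical check of this equivalence (my sketch, not from the slides): step the heat equation to t = 1/2 with explicit Euler on a periodic grid, then compare against direct convolution with the sigma = 1 Wiener kernel. The grid size and test image are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Smooth periodic test image on a grid with spacing h = 1.
n = 128
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
u = np.sin(2 * np.pi * 3 * i / n) * np.cos(2 * np.pi * 2 * j / n)
u_init = u.copy()

# Explicit Euler for u_t = Laplacian(u) up to t = 1/2
# (stable since dt <= h^2/4 in 2D).
dt, t_final = 0.1, 0.5
for _ in range(round(t_final / dt)):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u += dt * lap

# Heat flow to t = 1/2 is convolution with a Gaussian of
# variance 2t = 1, i.e. the sigma = 1 kernel above.
u_conv = gaussian_filter(u_init, sigma=1.0, mode='wrap')
print("max difference:", np.abs(u - u_conv).max())  # small
```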
Denoising by Rudin-Osher PDE
• Variational principle for noise removal
  – L. Rudin, S. Osher and E. Fatemi (1992)
  – For noisy image $u_0$, the denoised image u minimizes
    $V(u, u_0) = \int |\nabla u|\, dx + \lambda \int (u - u_0)^2\, dx$
• Gradient descent $\partial_t u = -\nabla_u V$ is the nonlinear parabolic PDE
  $\partial_t u = \nabla \cdot \left( \dfrac{\nabla u}{|\nabla u|} \right) - \lambda (u - u_0) = \partial_x \left( \dfrac{\partial_x u}{|\nabla u|} \right) + \partial_y \left( \dfrac{\partial_y u}{|\nabla u|} \right) - \lambda (u - u_0)$
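A bare-bones sketch of this gradient descent (my illustration, not the authors' code); eps regularizes $|\nabla u|$ where it vanishes, and lam, dt, and the step count are arbitrary choices:

```python
import numpy as np

def tv_denoise(u0, lam=0.1, dt=0.1, eps=1e-6, steps=200):
    """Explicit gradient descent for the Rudin-Osher-Fatemi energy."""
    u = u0.copy()
    for _ in range(steps):
        # Forward differences for the gradient (periodic boundaries).
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)  # regularized |grad u|
        # Divergence of (grad u / |grad u|) via backward differences.
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u = u + dt * (div - lam * (u - u0))
    return u
```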
Significance of Rudin-Osher
• The Rudin-Osher variational principle
  $V(u, u_0) = \int |\nabla u|\, dx + \lambda \int (u - u_0)^2\, dx$
  – λ is a Lagrange multiplier
  – u minimizes $\|u\|_{TV} = \int |\nabla u|\, dx$, for constant value of
    $\|u - u_0\|_{L^2}^2 = \int (u - u_0)^2\, dx$
• $\|u\|_{TV} = \int |\nabla u|\, dx$ measures the total variation (TV) of u
  – TV used for nonlinear hyperbolic PDEs
  – Promotes steep gradients, as in shock waves and edges
  – Edges are the dominant feature of images
Comparison of Rudin-Osher to Wiener
• Rudin-Osher variational principle
  $V(u, u_0) = \int |\nabla u|\, dx + \lambda \int (u - u_0)^2\, dx$
  – The $L^2$ alternative
    $V_W(u, u_0) = \int |\nabla u|^2\, dx + \lambda \int (u - u_0)^2\, dx$
    leads to a heat equation (with lower order terms),
    $\partial_t u = \nabla^2 u - \lambda (u - u_0),$
    almost the same as the PDE for Wiener filtering
• Rather than promoting edges like Rudin-Osher, Wiener filtering smooths gradients
Results for Rudin-Osher vs. Wiener
Rudin, Osher, Fatemi (1992)
Extensions of the Variational Approach to Imaging Applications
• Segmentation
• Inpainting
• Texture
Image Segmentation
• Find boundaries Γ of objects in image region Ω
  – Active contour model: given image u, Γ minimizes
    $E(\Gamma \mid u) = \mu\, \mathrm{length}(\Gamma) + \lambda \int_{\Omega \setminus \Gamma} \left( u(x) - m(x \mid \Gamma) \right)^2 dx$
  – $m(x \mid \Gamma)$ = average of u inside each component of $\Omega \setminus \Gamma$
  – T. Chan and L. Vese (2001)
• Earlier variational principle of Mumford-Shah
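To make the energy concrete, here is a sketch (my construction, not the authors' code) that evaluates E for a candidate segmentation given as a binary mask; length(Γ) is approximated by counting mask transitions between neighboring pixels:

```python
import numpy as np

def chan_vese_energy(u, mask, mu=1.0, lam=1.0):
    """Evaluate the active-contour energy for a binary mask.

    mask == True is 'inside' Gamma; m(x|Gamma) is the mean of u
    on each side of the contour.
    """
    m_in, m_out = u[mask].mean(), u[~mask].mean()
    fit = lam * (np.sum((u[mask] - m_in) ** 2) +
                 np.sum((u[~mask] - m_out) ** 2))
    # Approximate length(Gamma): count sign changes of the mask.
    length = (np.sum(mask[:, 1:] != mask[:, :-1]) +
              np.sum(mask[1:, :] != mask[:-1, :]))
    return mu * length + fit

# A mask matching the object should score lower than a mismatched one.
```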
Image Segmentation by Active Contour Model
Chan, Vese (2001)
Image Inpainting
• Extend image to region $D \subset \Omega$ where info is missing
  – TV inpainting model: given image $u_0$ and region D, inpainted image u minimizes
    $V(u \mid u_0, D) = \int_\Omega |\nabla u|\, dx + \lambda \int_{\Omega \setminus D} (u - u_0)^2\, dx$
  – Information in D found by continuing in from boundary Γ
  – T. Chan and J. Shen (2002)
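Structurally this is the same descent as in the denoising sketch above, with the fidelity term switched off inside D; again my illustration, with arbitrary parameters:

```python
import numpy as np

def tv_inpaint(u0, D, lam=10.0, dt=0.05, eps=1e-6, steps=2000):
    """TV inpainting: D is a boolean mask of the missing region.

    Same curvature flow as ROF denoising, but the data term
    lam*(u - u0) acts only outside D, so values inside D are
    filled purely by continuation from the boundary.
    """
    u = u0.copy()
    fid = lam * (~D)  # fidelity weight: lam outside D, 0 inside
    for _ in range(steps):
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u = u + dt * (div - fid * (u - u0))
    return u
```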
TV Inpainting
Chan, Shen (2005)
Texture
• Texture is the regular, oscillatory features in an image
  – Y. Meyer (2001): texture should average to 0, so that it belongs in the dual of BV
  – Image model:
    $u_0 = u + v + w,$
    $v = \nabla \cdot g$
  – With u = regular component, including contours;
    v = oscillatory texture component;
    w = oscillatory, unattractive noise component
  – Variational principle: For image $u_0$, choose u, g to minimize
    $V(u, g \mid u_0) = \int_\Omega |\nabla u|\, dx + \mu \|g\|_\infty + \lambda \int_\Omega \left( u_0 - u - \nabla \cdot g \right)^2 dx$
Example of Texture
Bertalmio, Vese, Sapiro, Osher (2003)
Inpainting of Texture
Bertalmio, Vese, Sapiro, Osher (2003)
New Methods in Information Science
• Wavelets
• Sparsity and Compressed Sensing
Wavelets
• Wavelets
  – An orthonormal basis
  – Indexed by position and scale (~ wavenumber)
  – Based on translation and scaling of a single function
  – Easy forward and inverse transforms
  – Localized in both x and k
• Invention
  – Wavelet transform developed: Morlet 1981
  – Nontrivial wavelet basis: Yves Meyer 1986
  – Compact and smooth wavelets: Daubechies 1988
Sparsity
• Sparsity in datasets (e.g., sensor signals)
  – Signal $x \in \mathbb{R}^N$ which is “m-sparse”, with $m \ll N$
  – i.e., x has at most m non-zero components
  – n measurements of x correspond to
    $f = Ax \in \mathbb{R}^n,$ with A an $n \times N$ matrix
• Objectives
  – How many measurements are required?
    • What is the value of n?
  – How hard is it to compute x?
    • Tractable or intractable?
Compressed Sensing
• Compressed sensing 2006
– David Donoho
– Emmanuel Candes, Justin Romberg & Terry Tao
Compressed Sensing
• Problem statement
  – Find x that is m-sparse and solves Ax = f
  – Assuming that an m-sparse solution exists
• Standard methods
  $\min \|x\|_0$ subject to constraint Ax = f
  – note $\|x\|_0 = \#\{i : x_i \neq 0\}$
• Compressed sensing
  $\min \|x\|_1$ subject to constraint Ax = f
  – note $\|x\|_1 = \sum_{i=1}^N |x_i|$
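The L1 problem is a linear program: writing x = p − q with p, q ≥ 0 turns $\min \|x\|_1$ into a linear objective with linear equality constraints. A small sketch with scipy's linprog (my example; the sparse signal and Gaussian A are made-up test data):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, m = 100, 30, 4            # ambient dim, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, m, replace=False)] = rng.standard_normal(m)
A = rng.standard_normal((n, N))
f = A @ x_true

# min sum(p + q)  s.t.  A(p - q) = f,  p, q >= 0   (so x = p - q)
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=f, bounds=(0, None))
x = res.x[:N] - res.x[N:]
print("recovery error:", np.abs(x - x_true).max())
```

With these sizes, n is far below N yet well above m log N, so the recovery is typically exact to solver tolerance.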
How many measurements are required?
• For $m \ll N$, find an m-sparse solution $x \in \mathbb{R}^N$ of
  $Ax = f \in \mathbb{R}^n,$ with A an $n \times N$ matrix
• Standard methods require: n = N
  – #(equations) = #(unknowns)
• Compressed sensing:
  $n = O(m \log N)$
  – n << N. Many fewer equations than unknowns!
  – Solution is exact with high probability!
• Restricted isometry property (RIP)
  – convex programming
How hard is it to compute x?
• Standard methods: NP-hard = intractable
• Compressed sensing: tractable and fast
  – via convex programming
Why Does L1 Promote Sparsity?
• Compressed sensing
  $\min \|x\|_1$ subject to constraint Ax = f
• Two simplified problems: Find x in $\mathbb{R}^2$ solving
  – 1. $\min \|x\|_1$ subject to constraint $a_1 x_1 + a_2 x_2 = f$
  – 2. $\min \tfrac{1}{2}(x - y)^2 + \lambda \|x\|_1$ for given y in $\mathbb{R}^2$
Version 1: Geometric solution
• Find x on the line $a_1 x_1 + a_2 x_2 = f$ with smallest $\|x\|_1$
  – For all but 45° lines, the $L^1$ norm is smallest at a vertex.
    • Vertices are sparse points, since a component is 0.
    • Works in higher dimensions
Version 2: Analytic Solution
• Given y, find x that minimizes
  $\tfrac{1}{2}(x - y)^2 + \lambda \|x\|_1 = \sum_{i=1}^N \left( \tfrac{1}{2}(x_i - y_i)^2 + \lambda |x_i| \right)$
  – Minimum x has each component $x_i$ minimizing $\tfrac{1}{2}(x_i - y_i)^2 + \lambda |x_i|$
  – Exercise: Show that the minimum is
    $x_i = S_\lambda(y_i) = \begin{cases} y_i - \lambda & \text{if } y_i > \lambda \\ 0 & \text{if } |y_i| \le \lambda \\ y_i + \lambda & \text{if } y_i < -\lambda \end{cases}$
• Operator Sλ is “soft-thresholding”
Soft Thresholding
[Plot: $x = S_\lambda(y)$ versus y]
$x = S_\lambda(y) = \begin{cases} y - \lambda & \text{if } y > \lambda \\ 0 & \text{if } |y| \le \lambda \\ y + \lambda & \text{if } y < -\lambda \end{cases}$
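The exercise is easy to check numerically; below is a vectorized form of $S_\lambda$ (my sketch) compared against brute-force minimization of the one-component objective:

```python
import numpy as np

def soft_threshold(y, lam):
    """S_lambda(y): shrink toward 0 by lam; zero inside [-lam, lam]."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Brute-force check that S_lambda minimizes (1/2)(x-y)^2 + lam*|x|.
lam, y = 0.7, 1.3
xs = np.linspace(-3, 3, 100001)
x_star = xs[np.argmin(0.5 * (xs - y) ** 2 + lam * np.abs(xs))]
print(x_star, soft_threshold(y, lam))   # both ~ 0.6 = y - lam
```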
Applications of Information Science to PDEs
• Wavelets for turbulence
• Sparsity for PDEs
Wavelets for Turbulence
• Turbulent solutions of the incompressible
Navier-Stokes equations
– Marie Farge and co-workers
– Transformed velocity into wavelet basis
– Deleted wavelet components with small coefficients
to get “coherent part”.
    • In a 2D $256^2$ computation, 0.7% of the wavelet coefficients retain 99.2% of the energy and 94% of the enstrophy. Farge et al., 1999
    • In a 3D $256^3$ computation, 3% of the wavelet coefficients retain 99% of the energy and 75% of the enstrophy. Okamoto et al., 2007
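A sketch of such a coherent/incoherent split using PyWavelets (my illustration; the wavelet family, decomposition level, and keep-fraction threshold are placeholders, not necessarily Farge's choices):

```python
import numpy as np
import pywt

def coherent_part(omega, wavelet='db4', level=4, keep=0.01):
    """Keep only the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(omega, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr_coh = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs_coh = pywt.array_to_coeffs(arr_coh, slices,
                                      output_format='wavedec2')
    return pywt.waverec2(coeffs_coh, wavelet)

# The incoherent part is the remainder: omega - coherent_part(omega)
```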
Vorticity in 2D Turbulence
Total field
Coherent part
Incoherent part
Farge, Schneider & Kevlahan 1999
Vorticity in 3D Turbulence
Okamoto, Yoshimatsu, Schneider, Farge, Kaneda 2007
Sparsity for PDEs
• PDE $\partial_t u = A[u]$
  – Schaeffer, Osher, Caflisch & Hauck, 2013
  – Apply soft-thresholding $S_\lambda$ to the Fourier coefficients
    • $\lambda = c\, \Delta t^2$
    • Alternatives to Fourier (e.g., framelets) now being used
  – Promotes sparsity
• How should soft-thresholding be used?
  – Discretization in time, $u^n = u(t_n = n \Delta t)$:
    $u^{n+1} = u^n + \Delta t\, A[u^n]$
    $u^{n+1} \leftarrow F^{-1} S_\lambda F[u^{n+1}]$, where F = Fourier transform
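A minimal 1D sketch of this scheme (my simplification, not the authors' code) for a convection-diffusion model problem; the operator A, the grid, and the constant c in λ = cΔt² are placeholder choices:

```python
import numpy as np

# 1D periodic grid and a convection-diffusion operator A[u].
n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi    # angular wavenumbers
nu, dt, c = 0.01, 1e-3, 1.0

def A(u):
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    uxx = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
    return -np.sin(x) * ux + nu * uxx   # varying convection + diffusion

def soft(z, lam):
    # Complex soft-thresholding: shrink the modulus, keep the phase.
    mag = np.abs(z)
    return np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-30) * z

u = np.exp(-10 * (x - np.pi) ** 2)       # initial condition
lam = c * dt ** 2                        # the paper's lambda scaling
for _ in range(1000):
    u = u + dt * A(u)                    # forward Euler step
    # Threshold the Fourier coefficients of the provisional step
    # (with these toy parameters the threshold is gentle).
    u = np.real(np.fft.ifft(soft(np.fft.fft(u), lam)))
```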
Examples for Sparse PDEs Solver
• Schaeffer, Osher, Caflisch & Hauck, 2013
• Examples
  – Convection equation with rapidly varying coefficients
  – Parabolic equation with rapidly varying coefficients
  – Viscous Burgers equation with rapidly varying convection term
  – 2D Navier-Stokes vorticity equation, with rapidly oscillatory forcing
    $\partial_t \omega + v \cdot \nabla \omega = \nu\, \Delta \omega + f, \qquad v = \nabla^\perp \Delta^{-1} \omega$
    • f is rapidly oscillating in x, constant in t
Sparse Solution of 2D Navier-Stokes for Interacting Vortices with Oscillatory Forcing
Schaeffer, Osher, Caflisch & Hauck, 2013
Possible Future Directions
• Texture in solutions of PDEs
  – Proposed model for the “incoherent component of vorticity” by Farge
• Combination of network-based models, data-driven models and continuum models
• Empirical mode decomposition (EMD)
– Norden Huang, Tom Hou, Nathan Kutz
• Machine learning for many applications
– Klaus Muller for materials
• Many possibilities!