Group 5

Dynamic Bayesian Networks (DBNs)
Dave, Hsieh Ding Fei
Frank, Yip Keung
Outline

Introduction to DBNs
Inference in DBNs
  Types of inference
  Exact inference
  Approximate inference
Applications
Conclusion
Introduction to DBNs

Motivation

Bayesian Network (BN) models
  Static nature of the problem domain
  An observable quantity is observed once and for all
  Confidence in the observation holds for all time

DBNs
  Domains involving repeated observations
  The process dynamically evolves over time
  Examples: monitoring a patient, traffic monitoring, etc.

Introduction to DBNs

Assumptions

The process is modeled in discrete time slices
  At time 1 the state is X(1); at time t the state is X(t)
  By the chain rule: P(X(1), …, X(t)) = P(X(1)) P(X(2)|X(1)) … P(X(t)|X(1), …, X(t-1))


Markov property
  Given the current state, the next state is independent of all previous states
  P(X(1), …, X(t)) = P(X(1)) P(X(2)|X(1)) … P(X(t)|X(t-1))

Introduction to DBNs

DBN model (DAG representation)

Edges express how tightly nodes are coupled
  Immediate effect → edge within the same time slice
  Long-term effect → edge between time slices
Introduction to DBNs

Special case of DBN: the HMM

The state of an HMM evolves in a Markovian way
An HMM can be modeled as a simple DBN
  Each time slice contains two variables: the state q and the observation o (a toy example follows)
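As a minimal illustration, an HMM viewed as a two-slice DBN is fully specified by an initial distribution, a transition model P(q(t) | q(t-1)), and an observation model P(o(t) | q(t)); the numbers below are toy values of our own, not from the presentation:

import numpy as np

# Hypothetical 2-state, 2-observation HMM written as a two-slice DBN.
pi = np.array([0.6, 0.4])        # P(q(1)): initial state distribution
A = np.array([[0.7, 0.3],        # A[i, j] = P(q(t)=j | q(t-1)=i): inter-slice edge q -> q'
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],        # B[i, k] = P(o(t)=k | q(t)=i): intra-slice edge q -> o
              [0.2, 0.8]])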
Inference

Types of inference

Prediction
  Given a probability distribution over the current state, predict the distribution over future states

Monitoring
  Given the observations (evidence) in every time slice, maintain the distribution over the current state
  Belief state at time T: P(X(T) | o(1), …, o(T))

Inference

Probability estimation
  Given a sequence of observations, one per time slice, determine the distribution over each intermediate state
  P(X(t) | o(1), …, o(T)) for t = 1, 2, …, T

Explanation
  Given an initial state and a sequence of observations o(1), …, o(T), determine the most likely sequence of states X(1), …, X(T)
Exact inference


For most inference tasks, a belief state needs to be maintained
Belief state
  A probability distribution over the current state
  It summarizes all information about the history
  It needs to be maintained compactly
Exact inference


How to accomplish exact inference
How to do this in a simple DBN: the HMM
  Given a number of time slices, the DBN is just a very long BN with a regular structure
  Standard Bayesian network algorithms can be used
Probability estimation task
  Clique tree propagation algorithm
  Forward-backward algorithm (see the sketch below)
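A minimal forward-backward sketch for the probability estimation query P(q(t) | o(1..T)), reusing the toy HMM parameters pi, A, B above; the code and names are our own illustration, not the original presentation's:

import numpy as np

def forward_backward(pi, A, B, obs):
    """Return smoothed marginals P(q(t) | o(1..T)) for a discrete HMM."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))                # alpha[t, i] ∝ P(q(t)=i, o(1..t))
    beta = np.ones((T, S))                  # beta[t, i] ∝ P(o(t+1..T) | q(t)=i)
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()              # normalize for numerical stability
    for t in range(1, T):                   # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):          # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                    # combine the two passes
    return gamma / gamma.sum(axis=1, keepdims=True)

# Usage: gamma = forward_backward(pi, A, B, [0, 1, 1])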

Exact inference

Monitoring task
  Only the forward pass of the forward-backward algorithm
Explanation task
  Viterbi's algorithm (see the sketch below)
Prediction task
  Based only on the current belief state, since it already summarizes the history
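A minimal Viterbi sketch for the explanation task, again on the toy HMM parameters pi, A, B; our own illustrative code:

import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence argmax P(q(1..T) | o(1..T))."""
    T, S = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # max-product message in log space
    back = np.zeros((T, S), dtype=int)          # backpointers
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):               # trace backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]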
Exact inference

dHugin: an exact inference computational system

  Inference methods of classical discrete time-series analysis
  Allows discrete multivariate dynamic systems
dHugin

Introduces the notion of a dynamic time window
  The window contains several time slices and is represented by a junction tree
  Operations: window expansion and reduction
  The window is expanded to perform forecasting
  Inference is formulated in terms of message passing in the junction tree

dHugin

Window expansion
1. Move k new consecutive time slices to the forecast model
2. Move the k oldest time slices of the forecast model to the time window
3. Moralize the compound graph comprising the graph in the window and the new k slices
4. Triangulate the time window
5. Construct a new junction tree
dHugin


Window reduction
Suppose there are k+1 time slices in the time window:
1. Make the k oldest slices in the time window into k backward smoothing models
2. The remaining (k+1)-st slice becomes the new time window
Forecasting


Calculate estimates of the distributions of future variables given past observations and present variables
Forecasting within the window
  Propagation
Forecasting beyond the window
1. A series of alternating expansion and reduction steps
2. Propagation is performed in each step
Problems of exact inference


Drawback: complex and requires a large amount of space for computation
The key issue is how to maintain the belief state
  Representing it naively requires an exponential number of entries
  It cannot be represented compactly by exploiting structure
    There is no conditional independence structure
    Variables become correlated with each other as time goes on
    This prevents the use of factorization ideas
(Figure: after several time slices the variables are not even conditionally independent within the same time slice.)
Approximate Inference

Objective


Try to maintain and propagate an approximate belief state when the state space of the dynamic process is very large
This improves the complexity of probabilistic inference
Approximate Inference

Two approaches

Structural approximation


Ignore weak correlations between variables in a belief
state
Stochastic simulation

Randomly sample from the states in the belief state
Structural Approximation

Problems in exact inference



All variables in a belief state are correlated
The belief state is expressed as a full joint distribution
  This needs an exponential number of table entries
Objective of structural approximation
  Use factorization to represent a complex system compactly, exploiting the fact that the variables interact only weakly with one another
Structural Approximation

Example (monitor a freeway with multiple cars)




The states of different cars (e.g., velocity, location) become correlated after a certain period of time
The approximation is to assume that these correlations are not very strong
Each car can then be treated as independent
The approximate belief state can be represented in a factorized way, as a product of separate distributions, one for each car
Structural Approximation

We can define a set of disjoint clusters Y1, …, Yk such that Y = Y1 ∪ Y2 ∪ … ∪ Yk. We maintain an approximate belief state that factorizes over the clusters:
  σ̂t(Y(t)) = ∏i σ̂t(Yi(t))

If this approximate belief state of time t is simply
propagated forward to time t+1, all variables
would become correlated again
Structural Approximation

It can be solved by executing the following process:
  At each time t, we take σ̂t and propagate it forward to time t+1, obtaining a new distribution σ̃t+1
  Approximate σ̃t+1 by its independent marginals
    Compute σ̃t+1(Yi(t+1)) for every i
    i.e. σ̃t+1(Yi(t+1)) = Σ over Y(t+1) \ Yi(t+1) of σ̃t+1(Y(t+1))
  The product of these marginals is σ̂t+1 (see the sketch below)
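A minimal sketch of this projection step for two clusters; the joint array, the cluster split, and the numbers are our own toy assumptions:

import numpy as np

def project_to_marginals(joint):
    """Approximate a joint over (Y1, Y2) by the product of its marginals."""
    p1 = joint.sum(axis=1)      # marginal over cluster Y1: sum out Y2
    p2 = joint.sum(axis=0)      # marginal over cluster Y2: sum out Y1
    return np.outer(p1, p2)     # factorized approximate belief state

# Toy correlated belief state sigma-tilde over two binary clusters.
sigma_tilde = np.array([[0.4, 0.1],
                        [0.1, 0.4]])
sigma_hat = project_to_marginals(sigma_tilde)   # [[0.25, 0.25], [0.25, 0.25]]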
Structural Approximation

Two sources of error
  The accumulated error resulting from propagation
  The error resulting from the approximation of σ̃t+1
Errors are bounded due to two opposing forces
  Propagation from time t to time t+1 adds noise to both the exact and the approximate belief state → reduces the difference between them → reduces the error
  Approximation → increases the error
Stochastic Simulation

Likelihood Weighting (LW)


Find the approximate belief state using sampling
Algorithm of LW (a minimal sketch follows)
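A minimal likelihood-weighting sketch for one time step on the toy HMM above; the names and structure are our own illustration (LW for a general DBN samples every non-evidence node, not just one state variable):

import numpy as np

rng = np.random.default_rng(0)

def lw_step(particles, weights, A, B, obs):
    """One LW step: sample next states from the prior, then reweight by the evidence."""
    # Sample q(t) from the transition prior P(q(t) | q(t-1)); the observation
    # affects only the weights, never which states get sampled.
    particles = np.array([rng.choice(len(A), p=A[q]) for q in particles])
    weights = weights * B[particles, obs]   # multiply in the evidence likelihood
    return particles, weights / weights.sum()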
Stochastic Simulation

Drawback




LW generates the samples at time t according to the prior distribution (conditioned on the samples at time t-1)
The observation affects the weights, but not the choice of samples
The samples become increasingly irrelevant as time goes on, since some of them are unlikely to explain the current observation
Example: monitoring a car's location
Stochastic Simulation


The samples at t = 5 are widely spread out, far away from the exact location of the vehicle
An improved algorithm called Survival-Of-Fittest is used
Stochastic Simulation


Survival-Of-Fittest (SOF)
  Propagates likely samples more often than unlikely samples
Algorithm of SOF (a minimal sketch follows)
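A minimal SOF-style sketch, again on the toy HMM: resampling in proportion to the weights before propagating means likely samples are propagated more often; our own illustrative code:

import numpy as np

rng = np.random.default_rng(0)

def sof_step(particles, weights, A, B, obs):
    """One SOF step: resample by weight, propagate, then reweight."""
    n = len(particles)
    # Resampling: likely particles are duplicated, unlikely ones die out.
    particles = particles[rng.choice(n, size=n, p=weights)]
    particles = np.array([rng.choice(len(A), p=A[q]) for q in particles])
    weights = B[particles, obs]             # fresh weights from the evidence
    return particles, weights / weights.sum()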
Stochastic Simulation
Belief state propagation over time
(a) exact belief state
(b) belief state using LW
(c) belief state using SOF
Application

Robot localization


Track a robot moving around in an environment
State variables
  x, y location
  Orientation
Transition model corresponds to motion
  The next position is a Gaussian around a linear function of the current position (see the sketch below)
Observation model
  Probability that the sonar detects an obstacle
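A minimal sketch of such a transition and observation model; every parameter here is an illustrative assumption of ours, not from the slides:

import numpy as np

rng = np.random.default_rng(0)

# State: (x, y, orientation).
F = np.eye(3)                      # linear dynamics
b = np.array([0.5, 0.0, 0.0])      # hypothetical displacement per step
NOISE_STD = 0.1

def motion_model(state):
    """Next state ~ Gaussian around a linear function of the current state."""
    return F @ state + b + rng.normal(0.0, NOISE_STD, size=3)

def sonar_likelihood(state, detected, p_detect=0.9):
    """P(sonar reading | state): detection is likely only near the toy obstacle at the origin."""
    near = np.hypot(state[0], state[1]) < 1.0
    p = p_detect if near else 1.0 - p_detect
    return p if detected else 1.0 - p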
Conclusion


Concepts of DBNs
Inference in DBNs
  Four types of inference
  Exact inference
    dHugin
  Approximate inference
    Structural approximation
    Search-based
    Stochastic simulation
Applications
  Robot localization