Inference in Graphical Models
Variable Elimination and Message Passing Algorithm
Le Song
Machine Learning II: Advanced Topics
CSE 8803ML, Spring 2012
Conditional Independence Assumptions

Local Markov Assumption (directed): $X \perp \text{Nondescendants}_X \mid \text{Pa}_X$
Global Markov Assumption (undirected): $A \perp B \mid C$ whenever $\text{sep}_G(A, B; C)$

[Figure: relationship between the two families; a BN is converted to an MN by moralization, and an MN to a BN by triangulation; undirected trees and undirected chordal graphs can be represented exactly in both families.]
Distribution Factorization

Bayesian Networks (Directed Graphical Models)
I-map: $I_{\ell}(G) \subseteq I(P)$  $\Leftrightarrow$  $P(X_1, \dots, X_n) = \prod_{i=1}^{n} P(X_i \mid \text{Pa}_{X_i})$
The factors $P(X_i \mid \text{Pa}_{X_i})$ are conditional probability tables (CPTs).

Markov Networks (Undirected Graphical Models)
For strictly positive $P$, I-map: $I(G) \subseteq I(P)$  $\Leftrightarrow$  $P(X_1, \dots, X_n) = \frac{1}{Z} \prod_{i=1}^{m} \Psi_i(D_i)$
The factors $\Psi_i(D_i)$ are potentials over maximal cliques $D_i$; the normalization constant (partition function) is $Z = \sum_{x_1, x_2, \dots, x_n} \prod_{i=1}^{m} \Psi_i(D_i)$.
Inference in Graphical Models

Graphical models give compact representations of probability distributions $P(X_1, \dots, X_n)$ (reducing $n$-way tables to much smaller tables).

How do we answer queries about $P$? We use inference as a name for the process of computing answers to such queries.
Query Type 1: Likelihood

Most queries involve evidence.
Evidence $e$ is an assignment of values to a set $E$ of variables, i.e., observations on some of the variables.
Without loss of generality, $E = \{X_{k+1}, \dots, X_n\}$.

Simplest query: compute the probability of the evidence,
$P(e) = \sum_{x_1} \cdots \sum_{x_k} P(x_1, \dots, x_k, e)$
This is often referred to as computing the likelihood of $e$.

[Figure: the network is split into the evidence set $E$ and the remaining variables, which are summed over.]
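As a concrete illustration (not from the slides), here is a minimal sketch of a brute-force likelihood computation over a hypothetical three-variable binary joint table; the variable names and probabilities are made up:

```python
# Hypothetical joint distribution P(X1, X2, X3) over binary variables,
# stored as a dictionary from assignments to probabilities.
joint = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.15, (0, 1, 0): 0.05, (0, 1, 1): 0.20,
    (1, 0, 0): 0.10, (1, 0, 1): 0.10, (1, 1, 0): 0.05, (1, 1, 1): 0.25,
}

def likelihood(joint, evidence):
    """P(e): sum the joint over all hidden variables, keeping the evidence fixed.

    evidence maps variable index -> observed value, e.g. {2: 1} for X3 = 1.
    """
    total = 0.0
    for assignment, prob in joint.items():
        if all(assignment[i] == v for i, v in evidence.items()):
            total += prob
    return total

print(likelihood(joint, {2: 1}))  # P(X3 = 1) = 0.15 + 0.20 + 0.10 + 0.25 = 0.70
```

This enumerates every hidden assignment, which is exactly the exponential cost that variable elimination (later in the lecture) avoids.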
Query Type 2: Conditional Probability

Often we are interested in the conditional probability distribution of a variable given the evidence:
$P(X \mid e) = \frac{P(X, e)}{P(e)} = \frac{P(X, e)}{\sum_x P(X = x, e)}$
This is also called the a posteriori belief in $X$ given evidence $e$.

We usually query a subset $Y$ of all variables $\mathcal{X} = \{Y, Z, e\}$ and "don't care" about the remaining $Z$:
$P(Y \mid e) = \sum_z P(Y, Z = z \mid e)$
This takes all possible configurations of $Z$ into account. The process of summing out the unwanted variables $Z$ is called marginalization.
Query Type 2: Conditional Probability Example

[Figure: two markings of the same network; in each, one set of nodes is the variables whose conditionals we are interested in, and the complementary set (including the evidence $E$) is summed over.]
Application of a posteriori Belief

Prediction: what is the probability of an outcome given the starting condition?
The query node is a descendant of the evidence.
[Figure: chain $A \rightarrow B \rightarrow C$]

Diagnosis: what is the probability of a disease/fault given symptoms?
The query node is an ancestor of the evidence.
[Figure: chain $A \rightarrow B \rightarrow C$]

Learning under partial observations (fill in the unobserved variables).

Information can flow in either direction: inference can combine evidence from all parts of the network.
Query Type 3: Most Probable Assignment

We want to find the most probable joint assignment for some variables of interest.
Such reasoning is usually performed under some given evidence $e$, ignoring the values of the other variables $Z$.
This is also called the maximum a posteriori (MAP) assignment for $Y$:
$\text{MAP}(Y \mid e) = \arg\max_y P(Y = y \mid e) = \arg\max_y \sum_z P(Y = y, Z = z \mid e)$

[Figure: the nodes are split into the evidence $E$, the variables whose most probable values we seek, and the remaining variables, which are summed over.]
Application of MAP assignment

Classification: find the most likely label, given the evidence.
Explanation: what is the most likely scenario, given the evidence?

Cautionary note: the MAP assignment of a variable depends on its context, i.e., the set of variables being jointly queried.

Example:
MAP of $(X, Y)$?  $(0, 0)$
MAP of $X$?  $1$

X  Y  P(X,Y)        X  P(X)
0  0  0.35          0  0.4
0  1  0.05          1  0.6
1  0  0.30
1  1  0.30
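The same effect can be checked with a few lines of Python (a sketch using exactly the two tables above; the helper names are our own):

```python
# Joint table from the slide: the MAP over (X, Y) differs from the MAP over X alone.
p_xy = {(0, 0): 0.35, (0, 1): 0.05, (1, 0): 0.30, (1, 1): 0.30}

# Most probable joint assignment of (X, Y)
map_xy = max(p_xy, key=p_xy.get)   # (0, 0), with probability 0.35

# Marginal P(X), then its most probable value
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
map_x = max(p_x, key=p_x.get)      # 1, since P(X=1) = 0.6 > P(X=0) = 0.4

print(map_xy, map_x)
```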
Complexity of Inference

Computing the a posteriori belief $P(X \mid e)$ in a GM is NP-hard in general.
Hardness implies we cannot find a general procedure that works efficiently for arbitrary GMs.

For particular families of GMs, we can have provably efficient procedures, e.g., trees.
For some families of GMs, we need to design efficient approximate inference algorithms, e.g., grids.
Approaches to inference
Exact inference algorithms
Variable elimination algorithm
Message-passing algorithm (sum-product, belief propagation
algorithm)
The junction tree algorithm
Approximate inference algorithms
Sampling methods/Stochastic simulation
Variational algorithms
Marginalization and Elimination

A metabolic pathway: what is the likelihood that protein $E$ is produced?
[Figure: chain $A \rightarrow B \rightarrow C \rightarrow D \rightarrow E$]

Query: $P(E)$
$P(E) = \sum_d \sum_c \sum_b \sum_a P(a, b, c, d, E)$

Using the graphical model, we get
$P(E) = \sum_d \sum_c \sum_b \sum_a P(a)\, P(b \mid a)\, P(c \mid b)\, P(d \mid c)\, P(E \mid d)$

Naïve summation needs to enumerate over an exponential number of terms.
Elimination in Chains

[Figure: chain $A \rightarrow B \rightarrow C \rightarrow D \rightarrow E$]

Rearranging terms and the summations:
$P(E) = \sum_d \sum_c \sum_b \sum_a P(a)\, P(b \mid a)\, P(c \mid b)\, P(d \mid c)\, P(E \mid d)$
$\quad\;\; = \sum_d \sum_c \sum_b P(c \mid b)\, P(d \mid c)\, P(E \mid d) \sum_a P(a)\, P(b \mid a)$
Elimination in Chains (cont.)

[Figure: chain $A \rightarrow B \rightarrow C \rightarrow D \rightarrow E$; eliminating $A$ produces the intermediate factor $P(b)$]

Now we can perform the innermost summation efficiently:
$P(E) = \sum_d \sum_c \sum_b P(c \mid b)\, P(d \mid c)\, P(E \mid d) \sum_a P(a)\, P(b \mid a)$
$\quad\;\; = \sum_d \sum_c \sum_b P(c \mid b)\, P(d \mid c)\, P(E \mid d)\, P(b)$

The innermost summation is equivalent to a matrix-vector multiplication, costing $|\text{Val}(A)| \times |\text{Val}(B)|$ operations.
The innermost summation eliminates one variable from our summation argument at a local cost.
Elimination in Chains (cont.)

[Figure: chain with intermediate factors $P(b)$ and $P(c)$; a small numerical table illustrates the matrix-vector product $P(c) = \sum_b P(c \mid b)\, P(b)$]

Rearranging and then summing again, we get
$P(E) = \sum_d \sum_c \sum_b P(c \mid b)\, P(d \mid c)\, P(E \mid d)\, P(b)$
$\quad\;\; = \sum_d \sum_c P(d \mid c)\, P(E \mid d) \sum_b P(c \mid b)\, P(b)$
$\quad\;\; = \sum_d \sum_c P(d \mid c)\, P(E \mid d)\, P(c)$

Again this is equivalent to a matrix-vector multiplication, costing $|\text{Val}(B)| \times |\text{Val}(C)|$ operations.
Elimination in Chains (cont.)

[Figure: chain with intermediate factors $P(b)$, $P(c)$, ..., propagated towards $E$]

Eliminate nodes one by one all the way to the end:
$P(E) = \sum_d P(E \mid d)\, P(d)$

Computational complexity for a chain of length $k$:
Each step $\Psi(X_i) = \sum_{x_{i-1}} P(X_i \mid x_{i-1})\, P(x_{i-1})$ costs $O(|\text{Val}(X_i)| \times |\text{Val}(X_{i+1})|)$ operations, for a total of $O(k n^2)$ when each variable takes $n$ values.
Compare to the naïve summation $\sum_{x_1} \cdots \sum_{x_{k-1}} P(x_1, \dots, X_k)$, which costs $O(n^k)$.
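As a sketch of the idea (not the lecture's code), chain elimination can be written as repeated vector-matrix products; the CPT values below are hypothetical, and only the chain structure $A \rightarrow B \rightarrow C \rightarrow D \rightarrow E$ follows the slides:

```python
import numpy as np

# Chain A -> B -> C -> D -> E with hypothetical CPTs.
# p_a[a] = P(A=a); each conditional table is indexed [parent value, child value].
p_a = np.array([0.6, 0.4])
P_b_given_a = np.array([[0.7, 0.3], [0.2, 0.8]])
P_c_given_b = np.array([[0.9, 0.1], [0.4, 0.6]])
P_d_given_c = np.array([[0.5, 0.5], [0.3, 0.7]])
P_e_given_d = np.array([[0.8, 0.2], [0.1, 0.9]])

# Eliminate A, then B, then C, then D: each step is a vector-matrix product,
# costing |Val(X_i)| * |Val(X_{i+1})| operations instead of enumerating all 2^5 terms.
message = p_a
for cpd in [P_b_given_a, P_c_given_b, P_d_given_c, P_e_given_d]:
    message = message @ cpd   # Psi(X_{i+1}) = sum_{x_i} P(X_{i+1} | x_i) Psi(x_i)

print(message)                # P(E); the entries sum to 1
```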
Undirected Chains

[Figure: undirected chain $A - B - C - D - E$]

Rearrange terms, perform local summation:
$P(E) = \frac{1}{Z} \sum_d \sum_c \sum_b \sum_a \Psi(b, a)\, \Psi(c, b)\, \Psi(d, c)\, \Psi(E, d)$
$\quad\;\; = \frac{1}{Z} \sum_d \sum_c \sum_b \Psi(c, b)\, \Psi(d, c)\, \Psi(E, d) \sum_a \Psi(b, a)$
$\quad\;\; = \frac{1}{Z} \sum_d \sum_c \sum_b \Psi(c, b)\, \Psi(d, c)\, \Psi(E, d)\, \Psi(b)$
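The only difference from the directed case is that the final result must be normalized by the partition function $Z$. A minimal sketch with hypothetical pairwise potentials (arbitrary positive numbers, not probabilities):

```python
import numpy as np

# Undirected chain A - B - C - D - E with hypothetical pairwise potentials.
psi_ba = np.array([[1.0, 2.0], [0.5, 1.0]])   # Psi(b, a), indexed [a, b]
psi_cb = np.array([[2.0, 1.0], [1.0, 3.0]])   # Psi(c, b), indexed [b, c]
psi_dc = np.array([[1.0, 1.0], [2.0, 0.5]])   # Psi(d, c), indexed [c, d]
psi_ed = np.array([[3.0, 1.0], [1.0, 2.0]])   # Psi(E, d), indexed [d, e]

# Same vector-matrix elimination as in the directed case, starting from sum_a Psi(b, a).
msg = psi_ba.sum(axis=0)                      # Psi(b) = sum_a Psi(b, a)
for pot in [psi_cb, psi_dc, psi_ed]:
    msg = msg @ pot                           # eliminate B, then C, then D

Z = msg.sum()                                 # partition function
print(msg / Z)                                # P(E)
```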
The Sum-Product Operation

During inference, we try to compute an expression in sum-product form: $\sum_{\mathcal{Z}} \prod_{\Psi \in \mathcal{F}} \Psi$
$\mathcal{X} = \{X_1, \dots, X_n\}$: the set of variables
$\mathcal{F}$: a set of factors such that for each $\Psi \in \mathcal{F}$, $\text{Scope}(\Psi) \subseteq \mathcal{X}$
$\mathcal{Y} \subset \mathcal{X}$: a set of query variables
$\mathcal{Z} = \mathcal{X} - \mathcal{Y}$: the variables to eliminate

The result of eliminating the variables in $\mathcal{Z}$ is a factor
$\tau(\mathcal{Y}) = \sum_{z} \prod_{\Psi \in \mathcal{F}} \Psi$
This factor does not necessarily correspond to any probability or conditional probability in the network. To obtain one, normalize:
$P(\mathcal{Y}) = \frac{\tau(\mathcal{Y})}{\sum_{\mathcal{Y}} \tau(\mathcal{Y})}$
Inference via Variable Elimination

General idea: write the query in the form
$P(X_1, e) = \sum_{x_n} \cdots \sum_{x_3} \sum_{x_2} \prod_i P(x_i \mid \text{Pa}_{X_i})$
The sums are ordered to suggest an elimination order.

Then iteratively:
Move all irrelevant terms outside of the innermost sum
Perform the innermost sum, getting a new term
Insert the new term into the product

Finally, renormalize:
$P(X_1 \mid e) = \frac{\tau(X_1, e)}{\sum_{x_1} \tau(x_1, e)}$
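To make the procedure concrete, here is a minimal, generic sketch of variable elimination over factors represented as tables on binary variables; it is an illustrative implementation under our own conventions, not reference code from the course:

```python
from itertools import product

def multiply(f1, f2):
    """Multiply two factors. A factor is (vars, table) with table[assignment] = value."""
    v1, t1 = f1
    v2, t2 = f2
    vs = list(dict.fromkeys(v1 + v2))           # union of scopes, order-preserving
    table = {}
    for assignment in product([0, 1], repeat=len(vs)):
        a = dict(zip(vs, assignment))
        table[assignment] = (t1[tuple(a[v] for v in v1)] *
                             t2[tuple(a[v] for v in v2)])
    return (vs, table)

def sum_out(factor, var):
    """Sum a variable out of a factor."""
    vs, table = factor
    keep = [v for v in vs if v != var]
    i = vs.index(var)
    new_table = {}
    for assignment, value in table.items():
        key = assignment[:i] + assignment[i + 1:]
        new_table[key] = new_table.get(key, 0.0) + value
    return (keep, new_table)

def variable_elimination(factors, order):
    """Eliminate variables in the given order; return the product of the remaining factors."""
    factors = list(factors)
    for var in order:
        relevant = [f for f in factors if var in f[0]]
        if not relevant:
            continue
        factors = [f for f in factors if var not in f[0]]
        prod = relevant[0]
        for f in relevant[1:]:
            prod = multiply(prod, f)
        factors.append(sum_out(prod, var))
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result
```

To answer $P(X_1 \mid e)$ with this sketch, one would first reduce each factor by the evidence, eliminate all hidden variables with an ordering that leaves $X_1$ last, and renormalize the resulting single-variable factor.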
A more complex network

A food web:
[Figure: directed graph over $A, B, C, D, E, F, G, H$, with edges $B \rightarrow C$, $A \rightarrow D$, $C \rightarrow E$, $D \rightarrow E$, $A \rightarrow F$, $E \rightarrow G$, $E \rightarrow H$, $F \rightarrow H$]

What is the probability $P(A \mid H)$ that hawks are leaving, given that the grass condition is poor?
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B, C, D, E, F, G, H$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$

Choose an elimination order: $H, G, F, E, D, C, B$

Step 1: Eliminate $H$ by conditioning (fix the evidence node to its observed value):
$m_h(e, f) = P(H = h \mid e, f)$

[Figure: the graph after removing $H$]
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B, C, D, E, F, G$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$

Step 2: Eliminate $G$:
compute $m_g(e) = \sum_g P(g \mid e) = 1$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_g(e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$

[Figure: the graph after also removing $G$]
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B, C, D, E, F$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$

Step 3: Eliminate $F$:
compute $m_f(a, e) = \sum_f P(f \mid a)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, m_f(a, e)$

[Figure: the graph after also removing $F$]
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B, C, D, E$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, m_f(a, e)$

Step 4: Eliminate $E$:
compute $m_e(a, c, d) = \sum_e P(e \mid c, d)\, m_f(a, e)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, m_e(a, c, d)$

[Figure: the graph after also removing $E$]
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B, C, D$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, m_f(a, e)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, m_e(a, c, d)$

Step 5: Eliminate $D$:
compute $m_d(a, c) = \sum_d P(d \mid a)\, m_e(a, c, d)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, m_d(a, c)$

[Figure: the graph after also removing $D$]
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B, C$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, m_f(a, e)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, m_e(a, c, d)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, m_d(a, c)$

Step 6: Eliminate $C$:
compute $m_c(a, b) = \sum_c P(c \mid b)\, m_d(a, c)$
$\Rightarrow P(a)\, P(b)\, m_c(a, b)$

[Figure: the graph after also removing $C$]
Example: Variable Elimination

Query: $P(A \mid h)$; need to eliminate $B$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, m_f(a, e)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, m_e(a, c, d)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, m_d(a, c)$
$\Rightarrow P(a)\, P(b)\, m_c(a, b)$

Step 7: Eliminate $B$:
compute $m_b(a) = \sum_b P(b)\, m_c(a, b)$
$\Rightarrow P(a)\, m_b(a)$

[Figure: the graph after also removing $B$]
Example: Variable Elimination

Query: $P(A \mid h)$; finally, renormalize over $A$.

Initial factors:
$P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, P(h \mid e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, P(g \mid e)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, P(f \mid a)\, m_h(e, f)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, P(e \mid c, d)\, m_f(a, e)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, P(d \mid a)\, m_e(a, c, d)$
$\Rightarrow P(a)\, P(b)\, P(c \mid b)\, m_d(a, c)$
$\Rightarrow P(a)\, P(b)\, m_c(a, b)$
$\Rightarrow P(a)\, m_b(a)$

Step 8: Renormalize:
$P(a, h) = P(a)\, m_b(a)$; compute $P(h) = \sum_a P(a)\, m_b(a)$
$\Rightarrow P(a \mid h) = \frac{P(a)\, m_b(a)}{\sum_a P(a)\, m_b(a)}$
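For completeness, the whole sequence of messages above can be sketched with NumPy's einsum for binary variables; the CPT values below are randomly generated placeholders, and only the factor structure and elimination order follow the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cpd(*shape):
    """Hypothetical CPD: the last axis is the child variable and sums to one."""
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)

p_a = random_cpd(2)                 # P(a)
p_b = random_cpd(2)                 # P(b)
p_c_b = random_cpd(2, 2)            # P(c | b), indexed [b, c]
p_d_a = random_cpd(2, 2)            # P(d | a), indexed [a, d]
p_e_cd = random_cpd(2, 2, 2)        # P(e | c, d), indexed [c, d, e]
p_f_a = random_cpd(2, 2)            # P(f | a), indexed [a, f]
p_g_e = random_cpd(2, 2)            # P(g | e), indexed [e, g]
p_h_ef = random_cpd(2, 2, 2)        # P(h | e, f), indexed [e, f, h]

h = 1                               # observed value of H

m_h = p_h_ef[:, :, h]                               # m_h(e, f) = P(H = h | e, f)
m_g = p_g_e.sum(axis=1)                             # m_g(e) = sum_g P(g | e) = 1
m_f = np.einsum('af,ef->ae', p_f_a, m_h)            # m_f(a, e) = sum_f P(f | a) m_h(e, f)
m_e = np.einsum('cde,e,ae->acd', p_e_cd, m_g, m_f)  # m_e(a, c, d)
m_d = np.einsum('ad,acd->ac', p_d_a, m_e)           # m_d(a, c) = sum_d P(d | a) m_e(a, c, d)
m_c = np.einsum('bc,ac->ab', p_c_b, m_d)            # m_c(a, b) = sum_c P(c | b) m_d(a, c)
m_b = np.einsum('b,ab->a', p_b, m_c)                # m_b(a) = sum_b P(b) m_c(a, b)

posterior = p_a * m_b
posterior /= posterior.sum()                        # P(A | H = h)
print(posterior)
```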
Complexity of variable elimination

Suppose in one elimination step we compute
$m_x(y_1, \dots, y_k) = \sum_x m'_x(x, y_1, \dots, y_k)$
$m'_x(x, y_1, \dots, y_k) = \prod_{i=1}^{k} m_i(x, y_{c_i})$

This requires
$k \cdot |\text{Val}(X)| \cdot \prod_i |\text{Val}(Y_{c_i})|$ multiplications:
for each value of $x, y_1, \dots, y_k$, we do $k$ multiplications.
$|\text{Val}(X)| \cdot \prod_i |\text{Val}(Y_{c_i})|$ additions:
for each value of $y_1, \dots, y_k$, we do $|\text{Val}(X)|$ additions.

[Figure: node $X$ connected to neighbors $y_1, \dots, y_i, \dots, y_k$]

Complexity is exponential in the number of variables in the intermediate factor.
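For instance (an illustrative case, not from the slides): with $k = 2$ incoming factors and all variables binary, computing $m_x(y_1, y_2)$ takes $2 \cdot 2 \cdot 2 \cdot 2 = 16$ multiplications and $2 \cdot 2 \cdot 2 = 8$ additions; both counts double with every additional binary variable kept in the intermediate factor.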
From Variable Elimination to Message Passing

Recall that the induced dependency during marginalization is captured in elimination cliques:
Summation ↔ Elimination
Intermediate term ↔ Elimination clique

Can this lead to a generic inference algorithm?
Tree Graphical Models

Undirected tree: a unique path between any pair of nodes.
Directed tree: all nodes except the root have exactly one parent.

[Figure: an undirected tree and the corresponding directed tree]
Equivalence of directed and undirected trees

Any undirected tree can be converted to a directed tree by choosing a root node and directing all edges away from it.
A directed tree and the corresponding undirected tree make the same conditional independence assertions.
The parameterizations are essentially the same:

Undirected tree: $P(X) = \frac{1}{Z} \prod_{i \in V} \Psi(X_i) \prod_{(i,j) \in E} \Psi(X_i, X_j)$
Directed tree: $P(X) = P(X_r) \prod_{(i,j) \in E} P(X_j \mid X_i)$
Equivalence: $\Psi(X_r) = P(X_r)$, $\Psi(X_i, X_j) = P(X_j \mid X_i)$, $Z = 1$, and $\Psi(X_i) = 1$ for $i \neq r$.
From Variable Elimination to Message Passing

Recall the Variable Elimination Algorithm:
Choose an ordering in which the query node $f$ is the final node
Eliminate node $i$ by removing all potentials containing $i$, taking the sum/product over $x_i$
Place the resultant factor back in the pool

For a tree graphical model:
Choose the query node $f$ as the root of the tree
View the tree as a directed tree with edges pointing towards $f$
Elimination of each node can then be viewed as message passing directly along the tree branches, rather than on some transformed graph
Thus, we can use the tree itself as a data structure for inference
Message passing for trees

Let $m_{ji}(X_i)$ denote the factor resulting from eliminating all variables below $j$, up to the edge $(j, i)$; it is a function of $X_i$:
$m_{ji}(X_i) = \sum_{x_j} \Psi(x_j)\, \Psi(X_i, x_j) \prod_{k \in N(j) \setminus i} m_{kj}(x_j)$
This is like a message sent from $j$ to $i$.

[Figure: nodes $k$ and $l$ send messages $m_{kj}(X_j)$ and $m_{lj}(X_j)$ to node $j$, which sends $m_{ji}(X_i)$ to node $i$, and so on towards the root $f$]

$P(x_f) \propto \Psi(x_f) \prod_{e \in N(f)} m_{ef}(x_f)$
$m_{ef}(x_f)$ represents a belief about $x_f$ coming from $x_e$.
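A minimal sketch of this recursion on a small tree (a star with hypothetical node and edge potentials; all names and numbers below are made up for illustration):

```python
import numpy as np

# Undirected tree: f - j, j - k, j - l (f is the query node / root).
# Hypothetical singleton potentials Psi(x) and pairwise potentials Psi(x_j, x_i).
psi = {'f': np.array([1.0, 2.0]), 'j': np.array([1.0, 1.0]),
       'k': np.array([0.5, 1.5]), 'l': np.array([2.0, 1.0])}
psi_pair = {('j', 'f'): np.array([[1.0, 0.5], [0.5, 2.0]]),
            ('k', 'j'): np.array([[1.0, 2.0], [3.0, 1.0]]),
            ('l', 'j'): np.array([[2.0, 1.0], [1.0, 2.0]])}
neighbors = {'f': ['j'], 'j': ['f', 'k', 'l'], 'k': ['j'], 'l': ['j']}

def message(j, i):
    """m_{j->i}(x_i) = sum over x_j of Psi(x_j) Psi(x_i, x_j) times incoming messages."""
    incoming = np.ones(2)
    for k in neighbors[j]:
        if k != i:
            incoming = incoming * message(k, j)
    # psi_pair[(j, i)][x_j, x_i]: sum over x_j
    return psi_pair[(j, i)].T @ (psi[j] * incoming)

belief = psi['f'] * message('j', 'f')
print(belief / belief.sum())      # P(x_f), proportional to Psi(x_f) times incoming messages
```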