Large Lump Detection via Edge Tracking and Classification

By the rule of probability and a mild conditional independence assumption we can write the current posterior density as:

$$
P(h_t, d_t \mid I_{1:t}) \approx \frac{1}{M} \sum_{n=1}^{M} P(h_t, d_t \mid h_{t-1} = h_{t-1}^n,\, d_{t-1} = d_{t-1}^n,\, I_{1:t}), \qquad (1)
$$

where $(h_{t-1}^n, d_{t-1}^n) \sim P(h_{t-1}, d_{t-1} \mid I_{1:t-1})$ are samples from the posterior density at the previous time point. The 'mild' conditional independence assumption is as follows: $P(h_{t-1}, d_{t-1} \mid I_{1:t}) \approx P(h_{t-1}, d_{t-1} \mid I_{1:t-1})$. The meaning of the random variables $h_t$ and $d_t$ will be made clear shortly; the $I$'s denote the image frames.
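As a minimal illustration of recursion (1), the following Python sketch approximates the current posterior as a uniform mixture over $M$ conditionals, each anchored at one sample from the previous posterior. All names here are hypothetical; how the conditional density itself is built is the subject of the rest of this section.

```python
def approx_posterior(samples_prev, conditional_density, frames):
    """Monte Carlo approximation of recursion (1).

    samples_prev: list of M pairs (h, d) sampled from the previous
                  posterior P(h_{t-1}, d_{t-1} | I_{1:t-1}).
    conditional_density: callable returning
                  P(h_t, d_t | h_{t-1}=h, d_{t-1}=d, I_{1:t}).
    frames: the image frames I_{1:t}.

    Returns a density over (h_t, d_t): the uniform mixture
    (1/M) * sum_n P(h_t, d_t | h^n_{t-1}, d^n_{t-1}, I_{1:t}).
    """
    M = len(samples_prev)

    def density(h_t, d_t):
        total = 0.0
        for (h_prev, d_prev) in samples_prev:
            total += conditional_density(h_t, d_t, h_prev, d_prev, frames)
        return total / M

    return density
```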
To keep this recursion going, it is crucial to model the probability density $P(h_t, d_t \mid h_{t-1} = h_{t-1}^n, d_{t-1} = d_{t-1}^n, I_{1:t})$. Let us first discuss what $h_t$ and $d_t$ stand for. After detecting edges on the $t$th frame $I_t$, we chop each edge segment into several pieces in such a way that the length of each edge piece is at most $k$ edgels, and we obtain as many $k$-length edge pieces as possible from each edge segment; $k$ is a user-set parameter. We call this set of edge pieces obtained from $I_t$ the feature set $F_t$. We now define $d_t$ and $h_t$ as follows:
$$
d_t : F_t \to \{-1, +1\} \qquad (2)
$$

and

$$
h_t : F_t \to F_{t-1} \cup \{\emptyset\}, \qquad (3)
$$
where $\emptyset$ denotes the null element. Note that $F_{t-1}$ is a similar set of edge pieces on the $(t-1)$th frame $I_{t-1}$. For an edge piece $i$ in $F_t$, if $d_t(i) = +1$, then the edge piece $i$ in $F_t$ belongs to a large lump; on the other hand, if $d_t(i) = -1$, then it does not belong to a large lump. $h_t$ is essentially a mapping between the edge pieces in $F_t$ and those in $F_{t-1}$. For example, $h_t(i) = j$ means the $i$th edge piece in $F_t$ corresponds to the $j$th edge piece in $F_{t-1}$. If $h_t(i) = \emptyset$, then the $i$th edge piece is left unassigned.
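To make the construction of $F_t$ concrete, here is a short Python sketch. The edge detector and the handling of a remainder shorter than $k$ are assumptions; the text only requires pieces of at most $k$ edgels, with as many $k$-length pieces as possible.

```python
def build_feature_set(edge_segments, k):
    """Chop each edge segment into pieces of at most k edgels.

    edge_segments: list of edge segments detected on I_t, each a list
                   of (x, y) edgel coordinates (detector not specified).
    k: user-set maximum length of an edge piece, in edgels.

    Returns F_t as a list of edge pieces. Each segment yields as many
    k-length pieces as possible; we assume a shorter remainder piece,
    if any, is kept as well.
    """
    feature_set = []
    for segment in edge_segments:
        for start in range(0, len(segment), k):
            feature_set.append(segment[start:start + k])
    return feature_set

def centroid(piece):
    """Centroid of an edge piece; used below for motion and patch cues."""
    xs = [p[0] for p in piece]
    ys = [p[1] for p in piece]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```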
We want to model the probability $P(h_t, d_t \mid h_{t-1} = h_{t-1}^n, d_{t-1} = d_{t-1}^n, I_{1:t})$ in a conditional random field framework, taking into account the following factors:
(A) The $i$th edge piece in $F_t$, the $h_t(i)$th edge piece in $F_{t-1}$, and the $h_{t-1}(h_t(i))$th edge piece in $F_{t-2}$ follow a motion model. For example, we may want the centroids of these three edge pieces to be collinear or nearly collinear.
(B) Let $\mathrm{Patch}(i)$ denote an image patch around the centroid of the $i$th edge piece in $F_t$. Similarly, let $\mathrm{Patch}(h_t(i))$ and $\mathrm{Patch}(h_{t-1}(h_t(i)))$ denote image patches around the centroids of the $h_t(i)$th edge piece in $F_{t-1}$ and the $h_{t-1}(h_t(i))$th edge piece in $F_{t-2}$, respectively. We may require that $\mathrm{Patch}(i)$, $\mathrm{Patch}(h_t(i))$, and $\mathrm{Patch}(h_{t-1}(h_t(i)))$ be similar in some sense.
(C) We can impose a pairwise neighborhood structure on the random variables $h_t$. For example, for two neighboring edge pieces $i$ and $j$ in $F_t$, we may encourage $h_t(i)$ and $h_t(j)$ to be different from each other. This way a one-to-one correspondence is encouraged for the mapping $h_t$. The neighborhood can be determined by a circle of a user-supplied radius $r$: two edge pieces in $F_t$ are neighbors when the Euclidean distance between their centroids is at most $r$.
(D) For $d_t$, we should encourage that $d_t(i)$ and $d_{t-1}(h_t(i))$ be the same for any edge piece $i$ in $F_t$.
(E) For each edge piece $i$ in $F_t$, we can obtain the output of a trained classifier $f(i) \in \{-1, +1\}$, where, as before, $+1$ denotes that the edge piece $i$ belongs to a large lump and $-1$ denotes that it does not. We may now encourage that $d_t(i)$ be the same as the output $f(i)$ of the classifier. Note that this classifier can be extremely flexible in terms of the image features it uses, as the entire set of frames $I_1, I_2, \ldots, I_t$ is available for making this decision. However, to keep the task simple and efficient, most likely only $I_t$ will be used in the classifier. (A sketch combining all five factors follows this list.)
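As one plausible concretization, factors (A)-(E) can be collected into a single energy whose weighted terms define the exponential-form density discussed next. All functional forms, weights, and names below (e.g. the `cues` object) are assumptions for illustration, not prescribed by the text.

```python
import math

def energy(h_t, d_t, h_prev, d_prev, cues, weights):
    """Weighted sum of factor potentials (A)-(E).

    h_t, d_t: candidate assignments for frame t (dicts keyed by edge piece).
    h_prev, d_prev: the conditioning sample (h^n_{t-1}, d^n_{t-1}).
    cues: precomputed quantities (hypothetical interface):
          cues.motion(i, j, j2)  deviation from the motion model (A)
          cues.patch(i, j, j2)   patch dissimilarity (B)
          cues.neighbors         pairs (i, j) in F_t within radius r (C)
          cues.classifier(i)     f(i) in {-1, +1} (E)
    weights: one nonnegative weight per factor, to be learned.
    """
    E = 0.0
    for i, j in h_t.items():
        if j is None:              # null assignment, penalized so that
            E += weights['null']   # the null element is used rarely
            continue
        j2 = h_prev.get(j)         # grandparent piece in F_{t-2}, if any
        if j2 is not None:
            E += weights['A'] * cues.motion(i, j, j2)   # (A)
            E += weights['B'] * cues.patch(i, j, j2)    # (B)
        E += weights['D'] * (0.0 if d_t[i] == d_prev[j] else 1.0)  # (D)
    for i, j in cues.neighbors:                          # (C)
        if h_t[i] is not None and h_t[i] == h_t[j]:
            E += weights['C']      # discourage many-to-one matches
    for i in d_t:                                        # (E)
        E += weights['E'] * (0.0 if d_t[i] == cues.classifier(i) else 1.0)
    return E

def unnormalized_density(*args):
    """Exponential form: P proportional to exp(-energy)."""
    return math.exp(-energy(*args))
```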
Taking the aforementioned factors into account, we can have an exponential form for the probability density $P(h_t, d_t \mid h_{t-1} = h_{t-1}^n, d_{t-1} = d_{t-1}^n, I_{1:t})$, where all these factors appear as a weighted sum. We now face two problems: (a) learning these weights, and (b) sampling inference for continuing the recursion (1). The former task may not be simple; the latter, however, is straightforward. We can perform Metropolis-Hastings (MH) sampling on this density.
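Written out explicitly (the factor potentials $\phi_k$ and weights $\lambda_k$ are generic placeholders), the exponential form reads:

$$
P(h_t, d_t \mid h_{t-1} = h_{t-1}^n, d_{t-1} = d_{t-1}^n, I_{1:t}) = \frac{1}{Z} \exp\!\left( \sum_{k \in \{A,\ldots,E\}} \lambda_k \, \phi_k(h_t, d_t, h_{t-1}^n, d_{t-1}^n, I_{1:t}) \right),
$$

where $Z$ is the normalizing constant. Conveniently, MH sampling never needs $Z$, since only ratios of the density enter the acceptance probability.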
In the MH algorithm, the proposals for $d_t$ can be as follows: invert the signs of all the edge pieces that belong to a connected edge segment detected from $I_t$. This way we will only obtain $+1$ or $-1$ for an entire edge segment, excluding all other assignments, with hardly any practical consequence. For a proposal for $h_t(i)$, we can look for the edge pieces in $F_{t-1}$ that are within some spatial proximity of the $i$th edge piece in $F_t$ and choose uniformly from these candidates. Also, to allow for null assignments, we can add $\emptyset$ to this list of candidates for $h_t(i)$. Note that for practical reasons one should assign $\emptyset$ as rarely as possible; this behavior can be encouraged while designing the density $P(h_t, d_t \mid h_{t-1} = h_{t-1}^n, d_{t-1} = d_{t-1}^n, I_{1:t})$.
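A minimal sketch of these proposals and the MH accept/reject step, assuming the hypothetical `unnormalized_density` from above, a `segments` structure grouping the pieces of $F_t$ by their parent edge segment, and a precomputed `candidates` map of nearby pieces in $F_{t-1}$:

```python
import random

def propose(h_t, d_t, segments, candidates):
    """One MH proposal: flip a whole segment's label, or remap one piece.

    segments: list of lists -- edge pieces of F_t grouped by the
              connected edge segment they were chopped from.
    candidates: dict mapping each piece i to the pieces of F_{t-1}
                within the chosen spatial proximity of i.
    """
    h_new, d_new = dict(h_t), dict(d_t)
    if random.random() < 0.5:
        # d_t proposal: invert the signs of an entire connected segment.
        for piece in random.choice(segments):
            d_new[piece] = -d_new[piece]
    else:
        # h_t proposal: uniform choice among nearby pieces of F_{t-1};
        # None stands for the null element. (A design that proposes the
        # null element less often would also be reasonable.)
        i = random.choice(list(h_t))
        h_new[i] = random.choice(list(candidates[i]) + [None])
    return h_new, d_new

def mh_step(h_t, d_t, density, segments, candidates):
    """Metropolis-Hastings step; the proposals above are symmetric,
    so the acceptance ratio reduces to a ratio of densities."""
    h_new, d_new = propose(h_t, d_t, segments, candidates)
    ratio = density(h_new, d_new) / density(h_t, d_t)
    if random.random() < min(1.0, ratio):
        return h_new, d_new
    return h_t, d_t
```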
Note further that if MH turns out to take too long to finish, we can approximate the recursion (1) by a single mean path, i.e., $M = 1$ in (1). For this case, we can employ dynamic programming for the inference recursion to determine the single mean sample path.
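To illustrate the dynamic programming alternative, here is a Viterbi-style recursion. This is a sketch only: it assumes the per-frame assignment has been reduced to a small discrete state set with chain-structured unary and pairwise costs, which is a simplification of the full $(h_t, d_t)$ space.

```python
def viterbi_path(states, unary, pairwise, T):
    """Min-cost path over a chain of T frames (Viterbi-style DP).

    states:   discrete candidate assignments per frame (assumed small;
              the full (h_t, d_t) space would require pruning).
    unary:    unary(t, s) -- cost of state s at frame t (e.g., the
              classifier and patch terms).
    pairwise: pairwise(s_prev, s) -- transition cost (e.g., the motion
              and label-consistency terms).
    Returns the minimum-cost state sequence: the single "mean" path.
    """
    cost = {s: unary(0, s) for s in states}
    back = []
    for t in range(1, T):
        new_cost, pointers = {}, {}
        for s in states:
            prev, c = min(((p, cost[p] + pairwise(p, s)) for p in states),
                          key=lambda x: x[1])
            new_cost[s] = c + unary(t, s)
            pointers[s] = prev
        cost, back = new_cost, back + [pointers]
    s = min(cost, key=cost.get)   # best final state; now backtrack
    path = [s]
    for pointers in reversed(back):
        s = pointers[s]
        path.append(s)
    return list(reversed(path))
```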