PSEUDOGRADIENT OPTIMIZATION IN THE PROBLEM
OF IMAGE INTERFRAME GEOMETRICAL
DEFORMATIONS ESTIMATION

G. V. Dikarina, G. L. Minkina, A. I. Repin, A. G. Tashlinskii

Ul'yanovsk State Technical University
432027, Russia, Ul'yanovsk, ul. Severnyi venets, 32,
phone (88422)430974, e-mail: tag@ulstu.ru
In pseudogradient estimation of image parameters, the character of the estimate convergence and the computational expenses depend on the local sample size, i.e. the number of image samples used for finding the pseudogradient. In this work the problem of local sample size optimization by the criterion of the minimum of computational expenses when estimating image interframe geometrical deformations is solved.
When solving the problem of image interframe geometrical deformations (IIGD) estimation, pseudogradient procedures (PGPs) are applied [1]:

\hat{\bar\alpha}_{t+1} = \hat{\bar\alpha}_t - \Lambda_{t+1} \beta_{t+1}(Q(Z_{t+1}, \hat{\bar\alpha}_t)),

where \bar\alpha is the vector of IIGD parameters to be estimated; \beta is the pseudogradient of the goal function Q characterizing the estimation quality; \Lambda_{t+1} is the gain matrix determining the parameter estimate increment at the (t+1)-th iteration; Z_{t+1} is the local sample of samples of the images Z^{(1)} = \{z_j^{(1)}\} and Z^{(2)} = \{z_j^{(2)}\} used to find \beta at the (t+1)-th iteration; Z^{(1)} and Z^{(2)} are the images to be observed. Obviously, the local sample size (LSS) \mu largely determines the computational expenses of realizing the PGP.
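As an illustration of such a procedure, the sketch below estimates a one-dimensional shift between two signals with a fixed-gain PGP and a mean-square goal function. The signal model, the gain value and the random sampling scheme are illustrative assumptions, not the authors' exact setup; the point is only that each iteration touches a local sample of mu positions rather than the whole image.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudogradient(alpha, z1, z2, mu):
    """Pseudogradient of the mean-square goal function
    Q(alpha) = E[(z1(j + alpha) - z2(j))^2] for a scalar shift alpha,
    computed from a local sample Z_t of only mu random positions."""
    n = len(z1)
    j = rng.integers(2, n - 2, size=mu)   # local sample positions
    x = np.arange(n)
    w = np.interp(j + alpha, x, z1)       # z1 resampled at j + alpha
    dw = (np.interp(j + alpha + 1.0, x, z1)
          - np.interp(j + alpha - 1.0, x, z1)) / 2.0  # slope of z1
    return np.mean(2.0 * (w - z2[j]) * dw)

def pgp_estimate(z1, z2, mu=5, gain=10.0, iters=200):
    """Fixed-gain PGP: alpha_{t+1} = alpha_t - gain * beta_{t+1}."""
    alpha = 0.0
    for _ in range(iters):
        alpha -= gain * pseudogradient(alpha, z1, z2, mu)
    return alpha
```

For example, with z1 = sin(0.2 x) and z2 a copy shifted by 3 samples, the estimate converges to the true shift; the larger mu is, the less noisy each pseudogradient step, but the more expensive the iteration.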
Let us consider the problem of PGP LSS optimization by the criterion of the minimum of computational expenses when estimating one IIGD parameter \alpha. In the estimation process the modulus of the estimate error \varepsilon = \hat\alpha - \alpha has to vary from some maximum \varepsilon_{max} to some minimum \varepsilon_{min}. Let us denote by d(\mu_t) the computational expenses necessary to perform the t-th PGP iteration when the local sample size is equal to \mu_t, and by

D_T = \sum_{t=1}^{T} d(\mu_t)

the total computational expenses required to decrease the error from \varepsilon_{max} to \varepsilon_{min}, where T is the number of iterations required to fulfill the condition \varepsilon_T \le \varepsilon_{min}.
Besides, let us denote as the reduced computational expenses the ratio of the computational expenses at the t-th iteration to the mathematical expectation M[v(\hat\alpha)] of the convergence rate of the estimate of the parameter \alpha (the computational expenses per unit of the convergence rate M[v(\hat\alpha)]):

d^*(\mu_t) = d(\mu_t) / M[v(\hat\alpha)],

where

M[v(\hat\alpha)] = \int \hat\alpha (w_{t+1}(\hat\alpha) - w_t(\hat\alpha)) d\hat\alpha

is numerically equal to the difference between the mathematical expectations of the estimate at the (t+1)-th and t-th iterations; w_t(\hat\alpha) and w_{t+1}(\hat\alpha) are the probability distribution densities (PDD) of the estimate \hat\alpha; \alpha is the exact parameter value.
_______________________________________________________________________
The work was supported by the Russian Foundation for Basic Research, project no. 07-01-00138-a.
The minimum of computational expenses at each iteration is ensured by the LSS that yields the minimum of the reduced computational expenses. Such an LSS for the t-th iteration will be assumed optimal and denoted by \mu^*_t:

\mu^*_t = k: d^*(k) = \min_{\mu} d^*(\mu), \mu = 1, 2, \ldots, k, \ldots . (1)

The parameter estimate error over T iterations of the PGP has to change from \varepsilon_{max} to \varepsilon_{min}, so the choice of the LSS at each iteration according to (1) ensures the minimum of the total computational expenses

D_{min}(T) = \sum_{t=1}^{T} d(\mu^*_t).
Let us consider the algorithm for finding the optimal dependence of the LSS on the iteration number

\mu^*_t, t = \overline{1, T}. (2)

For concreteness let us accept some assumptions. Let us split the computational expenses d(\mu_t) of performing one iteration of the algorithm into two constituents: the expenses of forming the local sample, d_L(\mu_t), and the other computational expenses, d_A(\mu_t):

d(\mu_t) = d_L(\mu_t) + d_A(\mu_t).

Besides, for simplicity let us assume that the expenses d_L(\mu_t) of forming the local sample are proportional to the size \mu_t of the local sample: d_L(\mu_t) = \mu_t d_\mu, where d_\mu is the computational expense at \mu = 1. Then

d(\mu_t) = d_\mu (d_A(\mu_t)/d_\mu + \mu_t).
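In code this cost split is a one-liner; the constants d_mu = 1 and d_A = 25 below are illustrative only (the latter matches one of the d_A/d_mu ratios used in the experiments later in the paper):

```python
def iteration_cost(mu, d_mu=1.0, d_a=25.0):
    """Cost of one PGP iteration under the assumed split:
    d(mu_t) = d_L(mu_t) + d_A = mu_t*d_mu + d_A,
    i.e. d(mu_t) = d_mu * (d_A/d_mu + mu_t)."""
    return d_mu * (d_a / d_mu + mu)
```

With these constants, increasing the LSS from 1 to 5 raises the per-iteration cost from 26 to 30 units, so the sample-forming term stays small relative to the fixed overhead d_A.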
Under the mentioned constraints, to construct the algorithm for finding (2) it is necessary to obtain an expression for calculating the mathematical expectation M[v(\hat\alpha)] of the parameter estimate convergence rate. It can be done through the PDD of the error \varepsilon. However, in this work let us consider a simplified method of calculating M[v(\hat\alpha)] based on the use of the step \lambda_t of the variation of the estimate of the parameter \alpha at the t-th iteration, the mathematical expectation M(\varepsilon_{t-1}) of the estimate error, and the probabilities of estimate improvement \rho_+(\varepsilon_{t-1}) and estimate deterioration \rho_-(\varepsilon_{t-1}) at the (t-1)-th iteration [2]. Indeed, the forecast of the mathematical expectation M(\varepsilon_t) of the estimate error at the t-th iteration can be represented in the following form:

M(\varepsilon_t) = M(\varepsilon_{t-1}) - \lambda_t (\rho_+(\varepsilon_{t-1}) - \rho_-(\varepsilon_{t-1})).

Then

M[v(\hat\alpha)] = M(\varepsilon_{t-1}) - M(\varepsilon_t) = \lambda_t (\rho_+(\varepsilon_{t-1}) - \rho_-(\varepsilon_{t-1})).

Thus the mathematical expectation M[v(\hat\alpha)] of the parameter estimate convergence rate at the t-th iteration can be expressed through \rho_+(\varepsilon_{t-1}), \rho_-(\varepsilon_{t-1}) and the step \lambda_t.
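The forecast above is easy to exercise numerically; the probability values in the example are made up purely for illustration:

```python
def forecast_error(m_prev, lam, p_plus, p_minus):
    """Forecast of the estimate error expectation:
    M(eps_t) = M(eps_{t-1}) - lam * (rho_plus - rho_minus)."""
    return m_prev - lam * (p_plus - p_minus)

def convergence_rate(lam, p_plus, p_minus):
    """Expected convergence rate:
    M[v] = M(eps_{t-1}) - M(eps_t) = lam * (rho_plus - rho_minus)."""
    return lam * (p_plus - p_minus)
```

With, say, M(eps_{t-1}) = 2.0, lam = 0.1, rho_plus = 0.7 and rho_minus = 0.3, the forecast error is 1.96 and the expected convergence rate is 0.04; note that with rho_0 = 0 this rate equals lam*(2*rho_plus - 1).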
Let us assume \rho_0(\varepsilon_{t-1}) = 0; then

M[v(\hat\alpha)] = \lambda_t (2\rho_+(\varepsilon_{t-1}) - 1) = \lambda_t \kappa(\varepsilon_{t-1}),

where \kappa(\varepsilon_{t-1}) = 2\rho_+(\varepsilon_{t-1}) - 1 is the estimate improvement coefficient (EIC). At the beginning of the calculation the maximum error \varepsilon_0 = \varepsilon_{max} of the parameter is specified as the initial approximation of the estimate (t = 0). Then, to simulate the first iteration, the variable t corresponding to the iteration number is increased by one. When simulating any iteration, the initial value of the LSS is equal to one, \mu = 1. For the given values \mu and \varepsilon_t, the EIC \kappa(\varepsilon_t, \mu), the computational expenses d(\mu) and the estimate convergence rate M[v(\mu)] at the given error \varepsilon_t and LSS \mu are calculated. Then the reduced computational expenses d^*(\mu) are found and compared with the expenses d^*(\mu - 1) obtained with the LSS \mu - 1. If d^*(\mu) < d^*(\mu - 1), the LSS is increased by one and the next value d^*(\mu + 1) is calculated. If d^*(\mu) > d^*(\mu - 1), then the optimal value of the LSS for the t-th iteration, equal to \mu^*_t = \mu - 1, is stored. Then a new estimate error is calculated: \varepsilon_{t+1} = \varepsilon_t - M[v(\mu^*_t)]. If \varepsilon_{t+1} > \varepsilon_{min}, the optimal LSS for the next, (t+1)-th, iteration is found; if \varepsilon_{t+1} \le \varepsilon_{min}, the algorithm terminates.
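The search just described can be sketched as follows. The EIC model kappa(eps, mu) is a caller-supplied function (in the paper it is computed from the estimation problem; the toy model used in the example is an arbitrary assumption), and d_mu, d_a are the cost-model constants:

```python
def optimal_lss_schedule(eps_max, eps_min, lam, eic,
                         d_mu=1.0, d_a=25.0, mu_max=1000):
    """Greedy search for the optimal LSS schedule mu*_t.

    eic(eps, mu) -> kappa in (0, 1]: estimate improvement coefficient
    (a model supplied by the caller; hypothetical here).
    At each iteration mu grows by one while the reduced expenses
    d*(mu) = d(mu) / M[v(mu)] keep decreasing; the last mu before
    they grow is taken as mu*_t, then the error forecast is updated:
    eps <- eps - M[v(mu*_t)], with M[v(mu)] = lam * kappa."""
    def d(mu):                                # d(mu) = d_mu*(d_A/d_mu + mu)
        return d_mu * (d_a / d_mu + mu)

    schedule = []
    eps = eps_max
    while eps > eps_min and len(schedule) < 10000:   # safety stop
        mu = 1
        best = d(mu) / (lam * eic(eps, mu))          # d*(1)
        while mu < mu_max:
            nxt = d(mu + 1) / (lam * eic(eps, mu + 1))
            if nxt >= best:                          # d* started to grow
                break
            best, mu = nxt, mu + 1
        schedule.append(mu)                          # mu*_t
        eps -= lam * eic(eps, mu)                    # new error forecast
    return schedule
```

For instance, with the toy EIC kappa = 1 - 2^(-mu) (independent of eps) and d_A/d_mu = 25, the search settles on mu* = 4 at every iteration, since d*(5) already exceeds d*(4).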
Examples of the optimal LSS as a function of the error \varepsilon_h of the estimate of the parallel shift of two images at the values of the parameter d_A(\mu_t)/d_\mu equal to 25 and 16.6 are given in Fig. 1,a and Fig. 1,b, respectively. In the calculations the additive model of the observed images

z_j^{(1)} = s_j^{(1)} + \theta_j^{(1)}, z_j^{(2)} = \tilde{s}_j^{(2)} + \theta_j^{(2)},

was accepted, where s(\cdot) is the desired random field with a known monotonically decreasing autocorrelation function; \theta_j^{(1)}, \theta_j^{(2)} are interfering independent Gaussian random fields with zero mean and equal variances \sigma_\theta^2. In the given plots the lower curve corresponds to the absence of noise, the middle one to a signal-to-noise ratio of 8, and the upper one to 15.
Analysis of the dependences shows that in the absence of noise the optimal LSS monotonically decreases as the estimate error decreases. In the presence of noise the LSS increases again at small errors, which is caused by the decrease of the EIC. Besides, the smaller the share of the computational expenses spent on forming the local sample, the larger the range of variation of the optimal LSS.
The table presents numerical results showing the loss in computational expenses at a constant LSS equal to 1, \mu_m - 3, \mu_m and \mu_m + 3 in comparison with the case of optimal LSS usage, where

\mu_m = int((1/T) \sum_{t=1}^{T} \mu^*_t)

is the average LSS over the iterations and T is the total number of PGP iterations. The signal-to-noise ratio g took the values 0, 2, 5, 10 and 20. It is evident that the loss decreases as the signal-to-noise ratio increases.
Table. The loss in computational expenses in comparison
with the case of optimal LSS usage, per cent

Signal-to-noise   Local sample size to be used
ratio             mu = 1   mu_m - 3   mu_m   mu_m + 3
g = 0             48.8     4.63       3.78   5.75
g = 2             60.9     0.58       0.43   0.81
g = 5             59.8     1.23       1.3    1.91
g = 10            58.2     2.18       1.74   3.22
g = 20            56.7     2.72       2.53   4.46
Thus the proposed method enables to solve
problem of optimal LSS calculation on
criterion of computational expenses minimum.
Fig. 1. The dependence of the optimal LSS on the iteration number:
a) d_A/d_\mu = 25; b) d_A/d_\mu = 16.6

References
1. Tashlinskii, A. Computational Expenditure Reduction in Pseudo-Gradient Image Parameter Estimation // Computational Science – ICCS 2003, Vol. 2658, Proceedings, Part II. Berlin: Springer, 2003. Pp. 456-462.
2. Tashlinskii, A. G. The Efficiency of Pseudogradient Procedures for the Estimation of Image Parameters with a Finite Number of Iterations // Pattern Recognition and Image Analysis, Vol. 8, 1998. Pp. 260-261.