7 Gauss-Jacobi Combinatorial Algorithm

Algebraic Geodesy and Geoinformatics - 2009 - PART I METHODS

7- 1 Linear model
Another technique to solve overdetermined systems was proposed by Gauss and Jacobi. The Gauss-Jacobi combinatorial
solution was originally employed for the linear regression problem. This means that if we have more independent equations, m,
than variables, n, so m > n, then the solution - in the least-squares sense - can be achieved by solving the Binomial[m, n]
combinatorial subsets of n equations, and then weighting these solutions properly. This method can be extended to the
nonlinear case using linearization. A key point of this extension is that we can solve the nonlinear subsets without a priori
information about the solutions, which would be necessary for the traditional linearization. The determination of the weights
through linearization takes place after the solution of these nonlinear subsets. Therefore the Gauss-Jacobi method can be
considered a global method.
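As a quick illustrative aside (Python, outside the Mathematica session), the subset counts used in this chapter's examples follow directly from the binomial coefficient:

```python
from math import comb

# Number of determined n-equation subsets of m equations: Binomial[m, n].
# The three (m, n) pairs below are the ones used in this chapter's examples.
print(comb(3, 2))  # linear example: 3 equations, 2 unknowns -> 3 subsets
print(comb(4, 2))  # 2D positioning: 4 equations, 2 unknowns -> 6 subsets
print(comb(6, 4))  # GPS N-point: 6 equations, 4 unknowns -> 15 subsets
```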
First, let us illustrate this method for a linear system. We consider a system with n = 2 unknowns, (x, y), and m = 3
equations, (eq1, eq2, eq3).
Clear["Global`*"]
eq1 = a1 x + b1 y - y1 == 0;
eq2 = a2 x + b2 y - y2 == 0;
eq3 = a3 x + b3 y - y3 == 0;
We shall show that in the case of a linear system, the least-squares solution and the weighted combinatorial
solution are the same. The least-squares solution can be computed in the following way:
- the matrix form of the system,

{b, A} = CoefficientArrays[{eq1, eq2, eq3}, {x, y}];
MatrixForm[A]

( a1  b1
  a2  b2
  a3  b3 )

b = -b; MatrixForm[b]

( y1
  y2
  y3 )
- then the least-squares solution is

xy = Simplify[PseudoInverse[A].b, Element[{a1, a2, a3, b1, b2, b3, y1, y2, y3}, Reals]];
MatrixForm[xy]

x = (a3 (-b1 b3 y1 - b2 b3 y2 + b1^2 y3 + b2^2 y3) + a1 (b2^2 y1 + b3^2 y1 - b1 b2 y2 - b1 b3 y3) + a2 (-b1 b2 y1 + b1^2 y2 + b3 (b3 y2 - b2 y3))) /
    (a3^2 (b1^2 + b2^2) - 2 a1 a3 b1 b3 - 2 a2 b2 (a1 b1 + a3 b3) + a2^2 (b1^2 + b3^2) + a1^2 (b2^2 + b3^2))

y = (a3^2 (b1 y1 + b2 y2) - a1 a3 (b3 y1 + b1 y3) - a2 (a1 b2 y1 + a1 b1 y2 + a3 b3 y2 + a3 b2 y3) + a2^2 (b1 y1 + b3 y3) + a1^2 (b2 y2 + b3 y3)) /
    (a3^2 (b1^2 + b2^2) - 2 a1 a3 b1 b3 - 2 a2 b2 (a1 b1 + a3 b3) + a2^2 (b1^2 + b3^2) + a1^2 (b2^2 + b3^2))
Now let us consider the solutions of the combinatorial pairs

A12 = CoefficientArrays[{eq1, eq2}, {x, y}][[2]];
xy12 = LinearSolve[A12, {y1, y2}]; MatrixForm[xy12]

{(b2 y1 - b1 y2)/(-a2 b1 + a1 b2), (a2 y1 - a1 y2)/(a2 b1 - a1 b2)}

A13 = CoefficientArrays[{eq1, eq3}, {x, y}][[2]];
xy13 = LinearSolve[A13, {y1, y3}]; MatrixForm[xy13]

{(b3 y1 - b1 y3)/(-a3 b1 + a1 b3), (a3 y1 - a1 y3)/(a3 b1 - a1 b3)}

A23 = CoefficientArrays[{eq2, eq3}, {x, y}][[2]];
xy23 = LinearSolve[A23, {y2, y3}]; MatrixForm[xy23]

{(b3 y2 - b2 y3)/(-a3 b2 + a2 b3), (a3 y2 - a2 y3)/(a3 b2 - a2 b3)}
The weights are the squares of the corresponding determinants,

Π12 = (a1 b2 - a2 b1)^2;
Π13 = (a1 b3 - a3 b1)^2;
Π23 = (a2 b3 - a3 b2)^2;
leading to the weighted combinatorical solution for variable x as
Π12 xy12@@1DD + Π13 xy13@@1DD + Π23 xy23@@1DD
xC =
Π12 + Π13 + Π23
 Simplify
HHa2 b1 - a1 b2L H- b2 y1 + b1 y2L + Ha3 b1 - a1 b3L H- b3 y1 + b1 y3L +
Ha3 b2 - a2 b3L H- b3 y2 + b2 y3LL ‘ IHa2 b1 - a1 b2L2 + Ha3 b1 - a1 b3L2 + Ha3 b2 - a2 b3L2 M
Similarly, the solution for y is given by

yC = (Π12 xy12[[2]] + Π13 xy13[[2]] + Π23 xy23[[2]])/(Π12 + Π13 + Π23) // Simplify

(a3^2 (b1 y1 + b2 y2) - a1 a3 (b3 y1 + b1 y3) - a2 (a1 b2 y1 + a1 b1 y2 + a3 b3 y2 + a3 b2 y3) + a2^2 (b1 y1 + b3 y3) + a1^2 (b2 y2 + b3 y3)) /
  (a3^2 (b1^2 + b2^2) - 2 a1 a3 b1 b3 - 2 a2 b2 (a1 b1 + a3 b3) + a2^2 (b1^2 + b3^2) + a1^2 (b2^2 + b3^2))
This Gauss-Jacobi combinatorial solution and the least-squares solution are indeed the same,

FullSimplify[{xC, yC} - xy]

{0, 0}
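The same equivalence can be checked numerically outside Mathematica. The Python sketch below (illustrative; the coefficient values are made up for the demonstration) weights each pair solution by the squared subset determinant and compares against the normal-equations least-squares solution:

```python
# Weighted combinatorial solution vs. least squares for a 3x2 linear system.
from itertools import combinations

A = [[1.0, 2.0], [3.0, -1.0], [2.0, 5.0]]   # m = 3 equations, n = 2 unknowns
y = [4.0, 1.0, 7.0]

def solve2(a, b, c, d, r1, r2):
    """Cramer's rule for the 2x2 system [[a, b], [c, d]] . {x, y} = {r1, r2}."""
    det = a * d - b * c
    return (r1 * d - r2 * b) / det, (a * r2 - c * r1) / det, det

num_x = num_y = den = 0.0
for i, j in combinations(range(3), 2):
    a, b = A[i]; c, d = A[j]
    xs, ys, det = solve2(a, b, c, d, y[i], y[j])
    w = det * det                      # weight = squared subset determinant
    num_x += w * xs; num_y += w * ys; den += w
xC, yC = num_x / den, num_y / den

# least squares via the normal equations A^T A x = A^T y
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
aty = [sum(A[k][i] * y[k] for k in range(3)) for i in range(2)]
xL, yL, _ = solve2(ata[0][0], ata[0][1], ata[1][0], ata[1][1], aty[0], aty[1])

print(abs(xC - xL) < 1e-9 and abs(yC - yL) < 1e-9)  # → True
```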
7- 2 Nonlinear model
7- 2- 1 Arithmetical Average of the Combinatorial Solution
In order to employ this result for a nonlinear model, the model should be linearized, but where? To solve this problem the
following algorithm may be used, improving the solution step by step:
- Compute the combinatorial solutions of the nonlinear model via resultants or Groebner bases, or alternatively with global
numerical methods, like homotopy,
- Compute the arithmetical average of these solutions,
- Consider this arithmetical average as the point where the Jacobian should be evaluated, and employ the weighting technique,
- Linearize the nonlinear system at this weighted result, and solve it as a linear least-squares problem.
Let us illustrate this technique with the combinatorial solution of the 2D positioning problem. As we have seen in Section 6- 4- 1, this problem can be solved easily via ALESS using Groebner basis.
Considering 4 points with known coordinates Pi (xi, yi) and their distances (ti) from a point P0 with unknown coordinates
(x0, y0), which are to be computed.
The values are presented in the following table,
Table 7.1 Data for the 2D positioning problem

Pi   x [m]       y [m]      ti [m]
1    48177.62    6531.28     611.023
2    49600.15    7185.19    1529.482
3    49830.93    5670.69    1323.884
4    47863.91    5077.24    1206.524
The coordinates and the distances,

x1 = 48177.62; x2 = 49600.15; x3 = 49830.93; x4 = 47863.91;
y1 = 6531.28; y2 = 7185.19; y3 = 5670.69; y4 = 5077.24;
t1 = 611.023; t2 = 1529.482; t3 = 1323.884; t4 = 1206.524;

The lists of variables,

X = {x1, x2, x3, x4}; Y = {y1, y2, y3, y4}; T = {t1, t2, t3, t4};

The observation equations are

f1 = (x1 - x0)^2 + (y1 - y0)^2 - t1^2;
f2 = (x2 - x0)^2 + (y2 - y0)^2 - t2^2;
f3 = (x3 - x0)^2 + (y3 - y0)^2 - t3^2;
f4 = (x4 - x0)^2 + (y4 - y0)^2 - t4^2;
So we have 4 equations and 2 unknowns,

m = 4; n = 2;

The number of the combinations,

mn = Binomial[m, n]

6

The combinatorial pairs,

R = Subsets[Range[m], {n}]

{{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}}
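The same pair list can be generated in any language; for instance, Python's itertools.combinations (an illustrative aside, not part of the session) produces the subsets in the same lexicographic order as Subsets:

```python
from itertools import combinations

# Python counterpart of Subsets[Range[m], {n}] with m = 4, n = 2
R = list(combinations(range(1, 5), 2))
print(R)  # → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```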
The corresponding data sets are of the form {xi, yi, ti, xj, yj, tj},

datac = Map[{X[[#[[1]]]], Y[[#[[1]]]], T[[#[[1]]]], X[[#[[2]]]], Y[[#[[2]]]], T[[#[[2]]]]} &, R]

{{48177.6, 6531.28, 611.023, 49600.2, 7185.19, 1529.48},
 {48177.6, 6531.28, 611.023, 49830.9, 5670.69, 1323.88},
 {48177.6, 6531.28, 611.023, 47863.9, 5077.24, 1206.52},
 {49600.2, 7185.19, 1529.48, 49830.9, 5670.69, 1323.88},
 {49600.2, 7185.19, 1529.48, 47863.9, 5077.24, 1206.52},
 {49830.9, 5670.69, 1323.88, 47863.9, 5077.24, 1206.52}}
The solutions of a general pair can be computed via reduced Groebner basis. The system,

F = {(xi - x0)^2 + (yi - y0)^2 - ti^2, (xj - x0)^2 + (yj - y0)^2 - tj^2};

The second order polynomial for the x0 coordinate,

px0 = GroebnerBasis[F, {x0, y0}, {y0}]

{ti^4 - 2 ti^2 tj^2 + tj^4 + 4 ti^2 x0 xi - 4 tj^2 x0 xi - 2 ti^2 xi^2 + 2 tj^2 xi^2 + 4 x0^2 xi^2 - 4 x0 xi^3 + xi^4 - 4 ti^2 x0 xj + 4 tj^2 x0 xj - 8 x0^2 xi xj + 4 x0 xi^2 xj + 2 ti^2 xj^2 - 2 tj^2 xj^2 + 4 x0^2 xj^2 + 4 x0 xi xj^2 - 2 xi^2 xj^2 - 4 x0 xj^3 + xj^4 - 2 ti^2 yi^2 - 2 tj^2 yi^2 + 4 x0^2 yi^2 - 4 x0 xi yi^2 + 2 xi^2 yi^2 - 4 x0 xj yi^2 + 2 xj^2 yi^2 + yi^4 + 4 ti^2 yi yj + 4 tj^2 yi yj - 8 x0^2 yi yj + 8 x0 xi yi yj - 4 xi^2 yi yj + 8 x0 xj yi yj - 4 xj^2 yi yj - 4 yi^3 yj - 2 ti^2 yj^2 - 2 tj^2 yj^2 + 4 x0^2 yj^2 - 4 x0 xi yj^2 + 2 xi^2 yj^2 - 4 x0 xj yj^2 + 2 xj^2 yj^2 + 6 yi^2 yj^2 - 4 yi yj^3 + yj^4}
its solution in general,

sol = ∆ /. Solve[a2 ∆^2 + a1 ∆ + a0 == 0, ∆]

{(-a1 - Sqrt[a1^2 - 4 a0 a2])/(2 a2), (-a1 + Sqrt[a1^2 - 4 a0 a2])/(2 a2)}
This means we have two solutions.
The coefficients,

A0 = Coefficient[px0, x0, 0]

{ti^4 - 2 ti^2 tj^2 + tj^4 - 2 ti^2 xi^2 + 2 tj^2 xi^2 + xi^4 + 2 ti^2 xj^2 - 2 tj^2 xj^2 - 2 xi^2 xj^2 + xj^4 - 2 ti^2 yi^2 - 2 tj^2 yi^2 + 2 xi^2 yi^2 + 2 xj^2 yi^2 + yi^4 + 4 ti^2 yi yj + 4 tj^2 yi yj - 4 xi^2 yi yj - 4 xj^2 yi yj - 4 yi^3 yj - 2 ti^2 yj^2 - 2 tj^2 yj^2 + 2 xi^2 yj^2 + 2 xj^2 yj^2 + 6 yi^2 yj^2 - 4 yi yj^3 + yj^4}

A1 = Coefficient[px0, x0, 1]

{4 ti^2 xi - 4 tj^2 xi - 4 xi^3 - 4 ti^2 xj + 4 tj^2 xj + 4 xi^2 xj + 4 xi xj^2 - 4 xj^3 - 4 xi yi^2 - 4 xj yi^2 + 8 xi yi yj + 8 xj yi yj - 4 xi yj^2 - 4 xj yj^2}

A2 = Coefficient[px0, x0, 2]

{4 xi^2 - 8 xi xj + 4 xj^2 + 4 yi^2 - 8 yi yj + 4 yj^2}
The function which computes the x0 coordinates for a pair {i, j},

X0[{xi_, yi_, ti_, xj_, yj_, tj_}] = sol /. {a0 -> A0, a1 -> A1, a2 -> A2} // Flatten;

For example, in case of the pair {1, 2},

X0[datac[[1]]]

{48071.6, 48565.3}

The values of all of the x0 coordinates are,

x0c = Map[X0[#] &, datac]

{{48071.6, 48565.3}, {48565.3, 48786.9}, {47629.7, 48565.3}, {48565.3, 50923.5}, {48565.3, 48693.}, {48565.3, 48991.2}}
The same computations can be done for y0, namely

py0 = GroebnerBasis[F, {x0, y0}, {x0}]

{ti^4 - 2 ti^2 tj^2 + tj^4 - 2 ti^2 xi^2 - 2 tj^2 xi^2 + xi^4 + 4 ti^2 xi xj + 4 tj^2 xi xj - 4 xi^3 xj - 2 ti^2 xj^2 - 2 tj^2 xj^2 + 6 xi^2 xj^2 - 4 xi xj^3 + xj^4 + 4 xi^2 y0^2 - 8 xi xj y0^2 + 4 xj^2 y0^2 + 4 ti^2 y0 yi - 4 tj^2 y0 yi - 4 xi^2 y0 yi + 8 xi xj y0 yi - 4 xj^2 y0 yi - 2 ti^2 yi^2 + 2 tj^2 yi^2 + 2 xi^2 yi^2 - 4 xi xj yi^2 + 2 xj^2 yi^2 + 4 y0^2 yi^2 - 4 y0 yi^3 + yi^4 - 4 ti^2 y0 yj + 4 tj^2 y0 yj - 4 xi^2 y0 yj + 8 xi xj y0 yj - 4 xj^2 y0 yj - 8 y0^2 yi yj + 4 y0 yi^2 yj + 2 ti^2 yj^2 - 2 tj^2 yj^2 + 2 xi^2 yj^2 - 4 xi xj yj^2 + 2 xj^2 yj^2 + 4 y0^2 yj^2 + 4 y0 yi yj^2 - 2 yi^2 yj^2 - 4 y0 yj^3 + yj^4}

B0 = Coefficient[py0, y0, 0]

{ti^4 - 2 ti^2 tj^2 + tj^4 - 2 ti^2 xi^2 - 2 tj^2 xi^2 + xi^4 + 4 ti^2 xi xj + 4 tj^2 xi xj - 4 xi^3 xj - 2 ti^2 xj^2 - 2 tj^2 xj^2 + 6 xi^2 xj^2 - 4 xi xj^3 + xj^4 - 2 ti^2 yi^2 + 2 tj^2 yi^2 + 2 xi^2 yi^2 - 4 xi xj yi^2 + 2 xj^2 yi^2 + yi^4 + 2 ti^2 yj^2 - 2 tj^2 yj^2 + 2 xi^2 yj^2 - 4 xi xj yj^2 + 2 xj^2 yj^2 - 2 yi^2 yj^2 + yj^4}

B1 = Coefficient[py0, y0, 1]

{4 ti^2 yi - 4 tj^2 yi - 4 xi^2 yi + 8 xi xj yi - 4 xj^2 yi - 4 yi^3 - 4 ti^2 yj + 4 tj^2 yj - 4 xi^2 yj + 8 xi xj yj - 4 xj^2 yj + 4 yi^2 yj + 4 yi yj^2 - 4 yj^3}

B2 = Coefficient[py0, y0, 2]

{4 xi^2 - 8 xi xj + 4 xj^2 + 4 yi^2 - 8 yi yj + 4 yj^2}

Y0[{xi_, yi_, ti_, xj_, yj_, tj_}] = sol /. {a0 -> B0, a1 -> B1, a2 -> B2} // Flatten;

y0c = Map[Y0[#] &, datac]

{{6058.98, 7133.03}, {6058.96, 6484.69}, {6058.97, 6260.82}, {6058.98, 6418.33}, {5953.76, 6058.92}, {4647.21, 6058.97}}
In order to decide which combination is the admissible solution, we should define the least-squares objective,

obj = Apply[Plus, Map[#^2 &, {f1, f2, f3, f4}]]

(-1.4557×10^6 + (47863.9 - x0)^2 + (5077.24 - y0)^2)^2 +
(-1.75267×10^6 + (49830.9 - x0)^2 + (5670.69 - y0)^2)^2 +
(-373349. + (48177.6 - x0)^2 + (6531.28 - y0)^2)^2 +
(-2.33932×10^6 + (49600.2 - x0)^2 + (7185.19 - y0)^2)^2

For example, let us consider the first pair. The x0 values,
x0c[[1]]

{48071.6, 48565.3}

The y0 values,

y0c[[1]]

{6058.98, 7133.03}

The possible combinations,

xyc = Tuples[{x0c[[1]], y0c[[1]]}]

{{48071.6, 6058.98}, {48071.6, 7133.03}, {48565.3, 6058.98}, {48565.3, 7133.03}}

Let us compute the values of the objective function,

objc = Map[obj /. {x0 -> #[[1]], y0 -> #[[2]]} &, xyc]

{4.05307×10^12, 2.00352×10^13, 1392.99, 1.62156×10^13}

This means that the third combination is the best, since it yields the smallest residual. In a general way,

xyc[[First[Flatten[Position[objc, Min[objc]]]]]]

{48565.3, 6058.98}
Let us generalize this selection technique for all points. The combinations,

xyc = MapThread[Tuples[{#1, #2}] &, {x0c, y0c}]

{{{48071.6, 6058.98}, {48071.6, 7133.03}, {48565.3, 6058.98}, {48565.3, 7133.03}},
 {{48565.3, 6058.96}, {48565.3, 6484.69}, {48786.9, 6058.96}, {48786.9, 6484.69}},
 {{47629.7, 6058.97}, {47629.7, 6260.82}, {48565.3, 6058.97}, {48565.3, 6260.82}},
 {{48565.3, 6058.98}, {48565.3, 6418.33}, {50923.5, 6058.98}, {50923.5, 6418.33}},
 {{48565.3, 5953.76}, {48565.3, 6058.92}, {48693., 5953.76}, {48693., 6058.92}},
 {{48565.3, 4647.21}, {48565.3, 6058.97}, {48991.2, 4647.21}, {48991.2, 6058.97}}}

The objective functions,

objc = Map[Map[obj /. {x0 -> #[[1]], y0 -> #[[2]]} &, #] &, xyc]

{{4.05307×10^12, 2.00352×10^13, 1392.99, 1.62156×10^13},
 {3704.25, 1.94999×10^12, 6.08017×10^11, 3.30569×10^12},
 {1.86388×10^13, 1.75892×10^13, 1192.93, 4.23786×10^11},
 {998.92, 1.3704×10^12, 1.33897×10^14, 1.45707×10^14},
 {1.1688×10^11, 62645.2, 1.93632×10^11, 2.07274×10^11},
 {3.92375×10^13, 1197.09, 3.47389×10^13, 2.16275×10^12}}

since the positions of the minima are

Map[First[Flatten[Position[#, Min[#]]]] &, objc]

{3, 1, 3, 1, 2, 2}

The proper solutions of the combinatorial subsets,

x0y0 = MapThread[xyc[[#1]][[#2]] &,
 {Range[mn], Map[First[Flatten[Position[#, Min[#]]]] &, objc]}]

{{48565.3, 6058.98}, {48565.3, 6058.96}, {48565.3, 6058.97}, {48565.3, 6058.98}, {48565.3, 6058.92}, {48565.3, 6058.97}}
Let us compute the arithmetical average,

{x0a, y0a} = Map[Mean[#] &, Transpose[x0y0]]

{48565.3, 6058.97}

NumberForm[%, 12]

{48565.2813311, 6058.96502123}
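The whole averaging step can be sketched in plain Python (illustrative, independent of the Mathematica session): each pair of distance circles from Table 7.1 is intersected in closed form instead of via Groebner basis, the intersection point with the smaller least-squares objective is kept, and the kept points are averaged.

```python
from itertools import combinations
from math import sqrt, hypot

X = [48177.62, 49600.15, 49830.93, 47863.91]
Y = [6531.28, 7185.19, 5670.69, 5077.24]
T = [611.023, 1529.482, 1323.884, 1206.524]

def objective(x0, y0):
    """Sum of squared residuals of the four observation equations."""
    return sum(((xi - x0)**2 + (yi - y0)**2 - ti**2)**2
               for xi, yi, ti in zip(X, Y, T))

def circle_pair(i, j):
    """Both intersection points of distance circles i and j."""
    dx, dy = X[j] - X[i], Y[j] - Y[i]
    d = hypot(dx, dy)
    a = (d * d + T[i]**2 - T[j]**2) / (2 * d)   # distance to the chord midpoint
    h = sqrt(T[i]**2 - a * a)                    # half chord length
    mx, my = X[i] + a * dx / d, Y[i] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

picked = [min(circle_pair(i, j), key=lambda p: objective(*p))
          for i, j in combinations(range(4), 2)]

x0a = sum(p[0] for p in picked) / len(picked)
y0a = sum(p[1] for p in picked) / len(picked)
print(x0a, y0a)   # close to the averages 48565.2813, 6058.9650 above
```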
7- 2- 2 Weighted Combinatorial Solution

As we have seen in the previous chapter, the high-precision solution is

{x0 -> 48565.269940218279911, y0 -> 6058.9781636967393724}

therefore one may improve this combinatorial solution by employing weighting. The weighting for a nonlinear system can be
carried out as in the case of the linear system. Let us consider the j-th equation of a linear model,

a_j1 x1 + ... + a_jn xn + b_j = 0

For example, in our case n = 2, so the j-th observation equation is,

a_j1 x1 + a_j2 x2 + b_j = 0
The linearized form of the j-th nonlinear model equation at (η1, ..., ηn) is

f_j(η) + ∂f_j/∂x1(η) (x1 - η1) + ... + ∂f_j/∂xn(η) (xn - ηn) = 0

In our case j = 1, ..., m, where m = 4.
For example, in case of n = 2, the j-th equation is

f_j(η) + ∂f_j/∂x1(η) (x1 - η1) + ∂f_j/∂x2(η) (x2 - η2) = 0

or

∂f_j/∂x1(η) x1 + ∂f_j/∂x2(η) x2 + f_j(η) - ∂f_j/∂x1(η) η1 - ∂f_j/∂x2(η) η2 = 0
As we have seen in case of the linear model (n = 2 and m = 4), the weight of the combinatorial solution {1, 2} was

Π12 = (det({a11, a12}, {a21, a22}))^2 = (a11 a22 - a21 a12)^2

In a similar way, the weight of the nonlinear model for the solution x12 is

Π12(η) = (det({∂f1/∂x1(η), ∂f1/∂x2(η)}, {∂f2/∂x1(η), ∂f2/∂x2(η)}))^2
       = (∂f1/∂x1(η) ∂f2/∂x2(η) - ∂f2/∂x1(η) ∂f1/∂x2(η))^2

where η = x12.
In our case, the partial derivatives,
f1x = D[f1, x0]
-2 (48177.6 - x0)

f1y = D[f1, y0]
-2 (6531.28 - y0)

f2x = D[f2, x0]
-2 (49600.2 - x0)

f2y = D[f2, y0]
-2 (7185.19 - y0)

f3x = D[f3, x0]
-2 (49830.9 - x0)

f3y = D[f3, y0]
-2 (5670.69 - y0)

f4x = D[f4, x0]
-2 (47863.9 - x0)

f4y = D[f4, y0]
-2 (5077.24 - y0)
The selected combinatorial solutions of the subsets in rule form,

x0y0s = Map[{x0 -> #[[1]], y0 -> #[[2]]} &, x0y0]

{{x0 -> 48565.3, y0 -> 6058.98}, {x0 -> 48565.3, y0 -> 6058.96}, {x0 -> 48565.3, y0 -> 6058.97},
 {x0 -> 48565.3, y0 -> 6058.98}, {x0 -> 48565.3, y0 -> 6058.92}, {x0 -> 48565.3, y0 -> 6058.97}}

Then the weights,

Π12 = (f1x f2y - f2x f1y) /. x0y0s[[1]]
-3.70144×10^6

Π13 = (f1x f3y - f3x f1y) /. x0y0s[[2]]
-1.78912×10^6

Π14 = (f1x f4y - f4x f1y) /. x0y0s[[3]]
2.84731×10^6

Π23 = (f2x f3y - f3x f2y) /. x0y0s[[4]]
-7.30893×10^6

Π24 = (f2x f4y - f4x f2y) /. x0y0s[[5]]
-903410.

Π34 = (f3x f4y - f4x f3y) /. x0y0s[[6]]
-6.05948×10^6
Employing these weights, the weighted solution {x0C, y0C} is,

Πw = {Π12, Π13, Π14, Π23, Π24, Π34};

xd = Transpose[x0y0][[1]]
{48565.3, 48565.3, 48565.3, 48565.3, 48565.3, 48565.3}

yd = Transpose[x0y0][[2]]
{6058.98, 6058.96, 6058.97, 6058.98, 6058.92, 6058.97}

x0C = Πw.xd/Apply[Plus, Πw]
48565.3

NumberForm[%, 12]
48565.2733593

y0C = Πw.yd/Apply[Plus, Πw]
6058.98

NumberForm[%, 12]
6058.97580576
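A Python sketch of this weighting step (illustrative; the six subset solutions are copied from the selection step, and, as in the session above, the Jacobi determinants themselves are used as weights):

```python
from itertools import combinations

X = [48177.62, 49600.15, 49830.93, 47863.91]
Y = [6531.28, 7185.19, 5670.69, 5077.24]
sols = [(48565.3, 6058.98), (48565.3, 6058.96), (48565.3, 6058.97),
        (48565.3, 6058.98), (48565.3, 6058.92), (48565.3, 6058.97)]

def jac_det(i, j, x0, y0):
    """Determinant of the pair Jacobian, evaluated at the pair solution."""
    fx = lambda k: -2.0 * (X[k] - x0)   # ∂f_k/∂x0
    fy = lambda k: -2.0 * (Y[k] - y0)   # ∂f_k/∂y0
    return fx(i) * fy(j) - fx(j) * fy(i)

pairs = list(combinations(range(4), 2))
w = [jac_det(i, j, *s) for (i, j), s in zip(pairs, sols)]
sw = sum(w)
x0C = sum(wk * s[0] for wk, s in zip(w, sols)) / sw
y0C = sum(wk * s[1] for wk, s in zip(w, sols)) / sw
print(x0C, y0C)   # near 48565.27, 6058.976
```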
7- 2- 3 Additional Linearization

This result can be further refined if we linearize our nonlinear system at the point {x0C, y0C} and solve it as a linear least-squares problem. The linearized equations,

q1 = Series[f1, {x0, x0C, 1}, {y0, y0C, 1}] // Normal // Expand
-3.19296×10^7 + 775.307 x0 - 944.608 y0

q2 = Series[f2, {x0, x0C, 1}, {y0, y0C, 1}] // Normal // Expand
1.14166×10^8 - 2069.75 x0 - 2252.43 y0

q3 = Series[f3, {x0, x0C, 1}, {y0, y0C, 1}] // Normal // Expand
1.18229×10^8 - 2531.31 x0 + 776.572 y0

q4 = Series[f4, {x0, x0C, 1}, {y0, y0C, 1}] // Normal // Expand
-8.00204×10^7 + 1402.73 x0 + 1963.47 y0

In matrix form,

Clear[b]
{b, A} = CoefficientArrays[{q1, q2, q3, q4}, {x0, y0}]; b = -b;

where

MatrixForm[A]

(  775.307   -944.608
  -2069.75   -2252.43
  -2531.31    776.572
   1402.73    1963.47 )

and

MatrixForm[b]

(  3.19296×10^7
  -1.14166×10^8
  -1.18229×10^8
   8.00204×10^7 )

The pseudoinverse solution,

PseudoInverse[A].b

{48565.3, 6058.98}

NumberForm[%, 12]

{48565.2699402, 6058.97816371}
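This refinement is a single Gauss-Newton step, which can be sketched in Python (illustrative; the starting point is the weighted result quoted above, and a 2x2 normal-equations solve replaces the pseudoinverse):

```python
X = [48177.62, 49600.15, 49830.93, 47863.91]
Y = [6531.28, 7185.19, 5670.69, 5077.24]
T = [611.023, 1529.482, 1323.884, 1206.524]
x0, y0 = 48565.2733593, 6058.97580576   # weighted combinatorial solution

# residuals f_i and Jacobian rows at (x0, y0)
f = [(xi - x0)**2 + (yi - y0)**2 - ti**2 for xi, yi, ti in zip(X, Y, T)]
J = [(-2 * (xi - x0), -2 * (yi - y0)) for xi, yi in zip(X, Y)]

# normal equations J^T J d = -J^T f, solved by Cramer's rule
a = sum(jx * jx for jx, _ in J); b = sum(jx * jy for jx, jy in J)
c = sum(jy * jy for _, jy in J)
r1 = -sum(jx * fi for (jx, _), fi in zip(J, f))
r2 = -sum(jy * fi for (_, jy), fi in zip(J, f))
det = a * c - b * b
dx, dy = (r1 * c - r2 * b) / det, (a * r2 - b * r1) / det
print(x0 + dx, y0 + dy)   # close to the pseudoinverse result above
```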
Here we summarize the results of the different approaches,

Table 7.2 Results of the Gauss-Jacobi solution and its improvements

Method                      x0               y0
Average                     48565.2813311    6058.9650219
Weighted                    48565.2733593    6058.9758061
Additional linearization    48565.2699402    6058.9781637
Exact                       48565.2699402    6058.9781637

The improvement can be clearly seen.
7- 3 Examples

7- 3- 1 GPS 4-Point Problem

This problem was solved in Section 5- 10- 2 with linear homotopy. Now we solve it in a different way, using
computer algebra, namely the Dixon resultant. The reason is that this result will be utilized for the N-point problem in the
next example, using the Gauss-Jacobi method.
Employing our equations ei = 0, i = 1...4, where

Clear[x1, x2, x3, x4, b]
e1 = (x1 - a0)^2 + (x2 - b0)^2 + (x3 - c0)^2 - (x4 - d0)^2;
e2 = (x1 - a1)^2 + (x2 - b1)^2 + (x3 - c1)^2 - (x4 - d1)^2;
e3 = (x1 - a2)^2 + (x2 - b2)^2 + (x3 - c2)^2 - (x4 - d2)^2;
e4 = (x1 - a3)^2 + (x2 - b3)^2 + (x3 - c3)^2 - (x4 - d3)^2;
First, this system of polynomial equations will be transformed into a system of linear equations plus one quadratic equation.
Let us expand, multiply by minus one, and sort the original equations,

eqsL = {e1L, e2L, e3L, e4L} = Map[-Sort[Expand[#]] &, {e1, e2, e3, e4}]

{-x1^2 - x2^2 - x3^2 + x4^2 + 2 x1 a0 - a0^2 + 2 x2 b0 - b0^2 + 2 x3 c0 - c0^2 - 2 x4 d0 + d0^2,
 -x1^2 - x2^2 - x3^2 + x4^2 + 2 x1 a1 - a1^2 + 2 x2 b1 - b1^2 + 2 x3 c1 - c1^2 - 2 x4 d1 + d1^2,
 -x1^2 - x2^2 - x3^2 + x4^2 + 2 x1 a2 - a2^2 + 2 x2 b2 - b2^2 + 2 x3 c2 - c2^2 - 2 x4 d2 + d2^2,
 -x1^2 - x2^2 - x3^2 + x4^2 + 2 x1 a3 - a3^2 + 2 x2 b3 - b3^2 + 2 x3 c3 - c3^2 - 2 x4 d3 + d3^2}
Subtracting the fourth equation from the other three, we get

q = Table[eqsL[[i]] - eqsL[[4]], {i, 1, 3}] // Simplify

{2 x1 a0 - a0^2 - 2 x1 a3 + a3^2 + 2 x2 b0 - b0^2 - 2 x2 b3 + b3^2 + 2 x3 c0 - c0^2 - 2 x3 c3 + c3^2 - 2 x4 d0 + d0^2 + 2 x4 d3 - d3^2,
 2 x1 a1 - a1^2 - 2 x1 a3 + a3^2 + 2 x2 b1 - b1^2 - 2 x2 b3 + b3^2 + 2 x3 c1 - c1^2 - 2 x3 c3 + c3^2 - 2 x4 d1 + d1^2 + 2 x4 d3 - d3^2,
 2 x1 a2 - a2^2 - 2 x1 a3 + a3^2 + 2 x2 b2 - b2^2 - 2 x2 b3 + b3^2 + 2 x3 c2 - c2^2 - 2 x3 c3 + c3^2 - 2 x4 d2 + d2^2 + 2 x4 d3 - d3^2}
This is a system of three linear equations, and they can be written as,

g1 = a0,3 x1 + b0,3 x2 + c0,3 x3 + d3,0 x4 + e0,3;

g2 = g1 /. {1 -> 2, 0 -> 1}
x1 a1,3 + x2 b1,3 + x3 c1,3 + x4 d3,1 + e1,3

g3 = g1 /. {1 -> 3, 0 -> 2}
x1 a2,3 + x2 b2,3 + x3 c2,3 + x4 d3,2 + e2,3

The coefficients {ai,3, bi,3, ci,3, d3,i, ei,3}, i = 0...2, can be determined as,

coeffs0 = Table[{Coefficient[q[[i + 1]], x1], Coefficient[q[[i + 1]], x2],
   Coefficient[q[[i + 1]], x3], Coefficient[q[[i + 1]], x4]} // Factor, {i, 0, 2}];
which are the coefficients of the variables {x1, x2, x3, x4}. The constant part is,

coeffs1 = Table[{q[[i]] - coeffs0[[i]].{x1, x2, x3, x4} // Simplify}, {i, 1, 3}];

Therefore, all of the coefficients are,

coeffs = Table[Union[coeffs0[[i]], coeffs1[[i]]], {i, 1, 3}]

{{2 (a0 - a3), 2 (b0 - b3), 2 (c0 - c3), -2 (d0 - d3), -a0^2 + a3^2 - b0^2 + b3^2 - c0^2 + c3^2 + d0^2 - d3^2},
 {2 (a1 - a3), 2 (b1 - b3), 2 (c1 - c3), -2 (d1 - d3), -a1^2 + a3^2 - b1^2 + b3^2 - c1^2 + c3^2 + d1^2 - d3^2},
 {2 (a2 - a3), 2 (b2 - b3), 2 (c2 - c3), -2 (d2 - d3), -a2^2 + a3^2 - b2^2 + b3^2 - c2^2 + c3^2 + d2^2 - d3^2}}

Let us assign these coefficients to the linear system,

coeffsn = Flatten[
 Table[Inner[#1 -> #2 &, {ai,3, bi,3, ci,3, d3,i, ei,3}, coeffs[[i + 1]], List], {i, 0, 2}]]

{a0,3 -> 2 (a0 - a3), b0,3 -> 2 (b0 - b3), c0,3 -> 2 (c0 - c3), d3,0 -> -2 (d0 - d3),
 e0,3 -> -a0^2 + a3^2 - b0^2 + b3^2 - c0^2 + c3^2 + d0^2 - d3^2, a1,3 -> 2 (a1 - a3), b1,3 -> 2 (b1 - b3), c1,3 -> 2 (c1 - c3),
 d3,1 -> -2 (d1 - d3), e1,3 -> -a1^2 + a3^2 - b1^2 + b3^2 - c1^2 + c3^2 + d1^2 - d3^2, a2,3 -> 2 (a2 - a3), b2,3 -> 2 (b2 - b3),
 c2,3 -> 2 (c2 - c3), d3,2 -> -2 (d2 - d3), e2,3 -> -a2^2 + a3^2 - b2^2 + b3^2 - c2^2 + c3^2 + d2^2 - d3^2}
In addition, we take one of the nonlinear equations, say the fourth one,

e4 = (x1 - a3)^2 + (x2 - b3)^2 + (x3 - c3)^2 - (x4 - d3)^2;

Now, we shall solve the linear system for the variables {x1, x2, x3}, with x4 as a parameter. This means the relations x1 = g(x4),
x2 = g(x4) and x3 = g(x4) will be computed. To do that, different methods can be employed. Here we shall employ the Dixon
resultant,

<< Resultant`Dixon`
Now we can solve the original system, eliminating the variables x2 and x3 to get x1 = g(x4),

drx1 = DixonResultant[{g1, g2, g3}, {x2, x3}, {u2, u3}]

-x1 a2,3 b1,3 c0,3 + x1 a1,3 b2,3 c0,3 + x1 a2,3 b0,3 c1,3 - x1 a0,3 b2,3 c1,3 - x1 a1,3 b0,3 c2,3 + x1 a0,3 b1,3 c2,3 - x4 b2,3 c1,3 d3,0 + x4 b1,3 c2,3 d3,0 + x4 b2,3 c0,3 d3,1 - x4 b0,3 c2,3 d3,1 - x4 b1,3 c0,3 d3,2 + x4 b0,3 c1,3 d3,2 - b2,3 c1,3 e0,3 + b1,3 c2,3 e0,3 + b2,3 c0,3 e1,3 - b0,3 c2,3 e1,3 - b1,3 c0,3 e2,3 + b0,3 c1,3 e2,3

This is a linear expression containing only x1 and x4,

Exponent[drx1, {x1, x2, x3, x4}]

{1, 0, 0, 1}

Then the solution for x1 = g(x4),

solx1 = Solve[drx1 == 0, x1]

{{x1 -> (-x4 b2,3 c1,3 d3,0 + x4 b1,3 c2,3 d3,0 + x4 b2,3 c0,3 d3,1 - x4 b0,3 c2,3 d3,1 - x4 b1,3 c0,3 d3,2 + x4 b0,3 c1,3 d3,2 - b2,3 c1,3 e0,3 + b1,3 c2,3 e0,3 + b2,3 c0,3 e1,3 - b0,3 c2,3 e1,3 - b1,3 c0,3 e2,3 + b0,3 c1,3 e2,3) /
  (a2,3 b1,3 c0,3 - a1,3 b2,3 c0,3 - a2,3 b0,3 c1,3 + a0,3 b2,3 c1,3 + a1,3 b0,3 c2,3 - a0,3 b1,3 c2,3)}}
Similarly, for the two additional variables, x2 = g(x4) and x3 = g(x4),

drx2 = DixonResultant[{g1, g2, g3}, {x1, x3}, {u1, u3}]

x2 a2,3 b1,3 c0,3 - x2 a1,3 b2,3 c0,3 - x2 a2,3 b0,3 c1,3 + x2 a0,3 b2,3 c1,3 + x2 a1,3 b0,3 c2,3 - x2 a0,3 b1,3 c2,3 - x4 a2,3 c1,3 d3,0 + x4 a1,3 c2,3 d3,0 + x4 a2,3 c0,3 d3,1 - x4 a0,3 c2,3 d3,1 - x4 a1,3 c0,3 d3,2 + x4 a0,3 c1,3 d3,2 - a2,3 c1,3 e0,3 + a1,3 c2,3 e0,3 + a2,3 c0,3 e1,3 - a0,3 c2,3 e1,3 - a1,3 c0,3 e2,3 + a0,3 c1,3 e2,3

Exponent[drx2, {x1, x2, x3, x4}]

{0, 1, 0, 1}

solx2 = Solve[drx2 == 0, x2]

{{x2 -> (x4 a2,3 c1,3 d3,0 - x4 a1,3 c2,3 d3,0 - x4 a2,3 c0,3 d3,1 + x4 a0,3 c2,3 d3,1 + x4 a1,3 c0,3 d3,2 - x4 a0,3 c1,3 d3,2 + a2,3 c1,3 e0,3 - a1,3 c2,3 e0,3 - a2,3 c0,3 e1,3 + a0,3 c2,3 e1,3 + a1,3 c0,3 e2,3 - a0,3 c1,3 e2,3) /
  (a2,3 b1,3 c0,3 - a1,3 b2,3 c0,3 - a2,3 b0,3 c1,3 + a0,3 b2,3 c1,3 + a1,3 b0,3 c2,3 - a0,3 b1,3 c2,3)}}

and

drx3 = DixonResultant[{g1, g2, g3}, {x1, x2}, {u1, u2}]

-x3 a2,3 b1,3 c0,3 + x3 a1,3 b2,3 c0,3 + x3 a2,3 b0,3 c1,3 - x3 a0,3 b2,3 c1,3 - x3 a1,3 b0,3 c2,3 + x3 a0,3 b1,3 c2,3 - x4 a2,3 b1,3 d3,0 + x4 a1,3 b2,3 d3,0 + x4 a2,3 b0,3 d3,1 - x4 a0,3 b2,3 d3,1 - x4 a1,3 b0,3 d3,2 + x4 a0,3 b1,3 d3,2 - a2,3 b1,3 e0,3 + a1,3 b2,3 e0,3 + a2,3 b0,3 e1,3 - a0,3 b2,3 e1,3 - a1,3 b0,3 e2,3 + a0,3 b1,3 e2,3

Exponent[drx3, {x1, x2, x3, x4}]

{0, 0, 1, 1}

solx3 = Solve[drx3 == 0, x3]

{{x3 -> (-x4 a2,3 b1,3 d3,0 + x4 a1,3 b2,3 d3,0 + x4 a2,3 b0,3 d3,1 - x4 a0,3 b2,3 d3,1 - x4 a1,3 b0,3 d3,2 + x4 a0,3 b1,3 d3,2 - a2,3 b1,3 e0,3 + a1,3 b2,3 e0,3 + a2,3 b0,3 e1,3 - a0,3 b2,3 e1,3 - a1,3 b0,3 e2,3 + a0,3 b1,3 e2,3) /
  (a2,3 b1,3 c0,3 - a1,3 b2,3 c0,3 - a2,3 b0,3 c1,3 + a0,3 b2,3 c1,3 + a1,3 b0,3 c2,3 - a0,3 b1,3 c2,3)}}
After substitution of these results into the nonlinear equation e4, we get a quadratic equation for x4,

G = e4 /. {solx1[[1, 1]], solx2[[1, 1]], solx3[[1, 1]]};

Exponent[G, x4, List]

{0, 1, 2}

The coefficients of the quadratic equation are,

h2d = Coefficient[G, x4^2];
h1d = Coefficient[G, x4];
h0d = Simplify[G - (h2d x4^2 + h1d x4)];

Indeed, this is a quadratic equation, since the remaining part h0d is free of x4,

D[h0d, x4]

0
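The same reduction can be sketched numerically in Python (illustrative, not the book's code; the data are the first four satellites of the next example, and the quadratic coefficients are recovered by sampling instead of symbolic elimination):

```python
from math import sqrt

# satellite coordinates and pseudorange-type terms (a, b, c, d)
a = [14177553.47, 15097199.81, 23460342.33, -8206488.95]
b = [-18814768.09, -4636088.67, -9433518.58, -18217989.14]
c = [12243866.38, 21326706.55, 8174941.25, 17605231.99]
d = [21119278.32, 22527064.18, 23674159.88, 20951647.38]

def solve3(M, r):
    """Cramer's rule for a 3x3 linear system M z = r."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(M)
    z = []
    for k in range(3):
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = r[i]
        z.append(det3(Mk) / D)
    return z

def position(x4):
    """x1, x2, x3 from the three differenced (linear) equations e_i - e_4 = 0."""
    M = [[2 * (a[i] - a[3]), 2 * (b[i] - b[3]), 2 * (c[i] - c[3])] for i in range(3)]
    r = [a[i]**2 - a[3]**2 + b[i]**2 - b[3]**2 + c[i]**2 - c[3]**2
         - d[i]**2 + d[3]**2 + 2 * x4 * (d[i] - d[3]) for i in range(3)]
    return solve3(M, r)

def G(x4):
    """Fourth original equation with x1, x2, x3 eliminated: quadratic in x4."""
    x1, x2, x3 = position(x4)
    return (x1 - a[3])**2 + (x2 - b[3])**2 + (x3 - c[3])**2 - (x4 - d[3])**2

# recover h2, h1, h0 by sampling the exact quadratic at three points
S = 1.0e4
h0 = G(0.0)
h2 = (G(S) + G(-S) - 2.0 * h0) / (2.0 * S * S)
h1 = (G(S) - G(-S)) / (2.0 * S)
# numerically stable quadratic formula (assumes h1 > 0, as for this data)
q = -(h1 + sqrt(h1 * h1 - 4.0 * h2 * h0)) / 2.0
roots = sorted([q / h2, h0 / q])
print(roots)   # the smaller root is the physically meaningful x4
```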
7- 3- 2 GPS N-Point Problem

Let us now apply the Gauss-Jacobi combinatorial solution. First, the subsets should be determined. In our case we have six
satellites, which means six equations and four unknowns.
For illustration, we have the following numerical values,

datan = {a0 -> 14177553.47, a1 -> 15097199.81,
 a2 -> 23460342.33, a3 -> -8206488.95, a4 -> 1399988.07, a5 -> 6995655.48,
 b0 -> -18814768.09, b1 -> -4636088.67, b2 -> -9433518.58,
 b3 -> -18217989.14, b4 -> -17563734.90, b5 -> -23537808.26,
 c0 -> 12243866.38, c1 -> 21326706.55, c2 -> 8174941.25,
 c3 -> 17605231.99, c4 -> 19705591.18, c5 -> -9927906.48,
 d0 -> 21119278.32, d1 -> 22527064.18, d2 -> 23674159.88,
 d3 -> 20951647.38, d4 -> 20155401.42, d5 -> 24222110.91};
m = 6; n = 4;

The number of the subsets,

mn = Binomial[m, n]

15

These subsets are,

qs = Partition[Map[# - 1 &, Flatten[Subsets[Range[m], {n}]]], n]

{{0, 1, 2, 3}, {0, 1, 2, 4}, {0, 1, 2, 5}, {0, 1, 3, 4}, {0, 1, 3, 5},
 {0, 1, 4, 5}, {0, 2, 3, 4}, {0, 2, 3, 5}, {0, 2, 4, 5}, {0, 3, 4, 5},
 {1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 4, 5}, {1, 3, 4, 5}, {2, 3, 4, 5}}
The values of the indices start from zero, in correspondence with the indices of the coefficients of the equations. Now we
shall utilize the symbolic solution of the GPS 4-point problem, namely the expressions of the coefficients of the quadratic
equation (h2, h1, h0). Therefore, we construct a new data list, datap, similar to datan, which assigns the proper values to
the coefficients of the equations of the 15 subsets,

datap = Table[Map[Select[datan, MemberQ[qs[[i]], #[[1, 2]]] &] /.
   {#[[1]] -> 0, #[[2]] -> 1, #[[3]] -> 2, #[[4]] -> 3} &, {qs[[i]]}], {i, 1, mn}];

For example, the fourth subset is indexed as

qs[[4]]

{0, 1, 3, 4}

and it has the proper data assignments,

datap[[4]]

{{a0 -> 1.41776×10^7, a1 -> 1.50972×10^7, a2 -> -8.20649×10^6, a3 -> 1.39999×10^6,
  b0 -> -1.88148×10^7, b1 -> -4.63609×10^6, b2 -> -1.8218×10^7, b3 -> -1.75637×10^7,
  c0 -> 1.22439×10^7, c1 -> 2.13267×10^7, c2 -> 1.76052×10^7, c3 -> 1.97056×10^7,
  d0 -> 2.11193×10^7, d1 -> 2.25271×10^7, d2 -> 2.09516×10^7, d3 -> 2.01554×10^7}}
Now, we can employ the symbolic expressions of the coefficients of the quadratic equation for x4, (h2, h1, h0), which were
developed in the previous example. These coefficients can be evaluated for all of the 15 combinatorial subsets, (H2, H1, H0),

H2 = Map[(h2d /. coeffsn /. Flatten[#]) &, datap];
H1 = Map[(h1d /. coeffsn /. Flatten[#]) &, datap];
H0 = Map[(h0d /. coeffsn /. Flatten[#]) &, datap];

It is useful to display these coefficients,

H210 = Transpose[{H2, H1, H0}];
H210c = Map[# /. datan &, H210]; TableForm[SetPrecision[H210c, mn]]

(TableForm output: the triples (H2, H1, H0) of the 15 subsets; H2 lies between about -0.94 and -0.87 and H1 is about 5×10^7 for every subset, while H0 varies over many orders of magnitude, with an extreme value for the 10-th subset.)

This table indicates that the 10-th combination has a poor geometry, a fact which can also be detected by computing its PDOP
(Position Dilution of Precision).
Then the 15 quadratic equations can be solved for x4,

X4 = Map[x4 /. Solve[#[[1]] x4^2 + #[[2]] x4 + #[[3]] == 0, x4][[1, 1]] &, H210c];

These values of x4 can be substituted into the symbolic relations x1 = g(x4), x2 = g(x4) and x3 = g(x4) developed for the GPS 4-point problem,

X1 = MapThread[(x1 /. solx1[[1, 1]] /. coeffsn /. Flatten[#1] /. x4 -> #2) &, {datap, X4}];
X2 = MapThread[(x2 /. solx2[[1, 1]] /. coeffsn /. Flatten[#1] /. x4 -> #2) &, {datap, X4}];
X3 = MapThread[(x3 /. solx3[[1, 1]] /. coeffsn /. Flatten[#1] /. x4 -> #2) &, {datap, X4}];

Let us display these solutions for (x1, x2, x3, x4),

X = Transpose[{X1, X2, X3, X4}]; TableForm[SetPrecision[X, 11]]

596925.34851  -4.8478173618×10^6  4.0882067822×10^6  -0.93600958234
596790.31236  -4.8477657637×10^6  4.0881157092×10^6  -157.06383064
596920.41981  -4.8478154785×10^6  4.0882034581×10^6  -6.6345316816
596972.82610  -4.8479334365×10^6  4.0884120909×10^6  185.64239270
596924.21179  -4.8478145827×10^6  4.0882018667×10^6  -5.4031180901
596859.97147  -4.8478297585×10^6  4.0882288277×10^6  -26.264702949
596973.57787  -4.8477624719×10^6  4.0883998670×10^6  68.339798476
596924.23406  -4.8478186302×10^6  4.0882023205×10^6  -2.5368115361
596858.76504  -4.8477645341×10^6  4.0882218468×10^6  -72.871576603
596951.52753  -4.8527795710×10^6  4.0887586427×10^6  3510.4002371
597004.75624  -4.8479652225×10^6  4.0883006135×10^6  120.59014689
596915.86575  -4.8477997045×10^6  4.0881955770×10^6  -15.448555631
596948.56186  -4.8479129549×10^6  4.0882521599×10^6  47.831912758
597013.71941  -4.8479741452×10^6  4.0882693206×10^6  102.32915572
597013.13002  -4.8480196765×10^6  4.0882739565×10^6  134.62302196
In order to compute the weights of these solutions, one has to compute the squares of the 15 Jacobi determinants. Each has
size 4 × 4, because there are four equations and four variables in each subset. Starting with the general form of the i-th equation,

e = (x1 - ai)^2 + (x2 - bi)^2 + (x3 - ci)^2 - (x4 - di)^2;

The partial derivatives are,

de = {D[e, x1], D[e, x2], D[e, x3], D[e, x4]}

{2 (x1 - ai), 2 (x2 - bi), 2 (x3 - ci), -2 (x4 - di)}

The numerical values of these partial derivatives will be computed at the corresponding combinatorial solutions, see the table
above (variable X). Therefore the weights Πj, the squares of the 15 Jacobi determinants, are,

Πs = Table[Map[(Det[{(de /. i -> #[[1]]), (de /. i -> #[[2]]), (de /. i -> #[[3]]),
      (de /. i -> #[[4]])} /. datan])^2 &, qs][[j]] /.
   {x1 -> X[[j, 1]], x2 -> X[[j, 2]], x3 -> X[[j, 3]], x4 -> X[[j, 4]]}, {j, 1, 15}];

The sum of these weights is sΠ = Sum[Πj, {j, 1, 15}],

sΠs = Apply[Plus, Πs]

2.79722×10^61
Then the weighted solution for the variable xi is

xi = (1/sΠ) Sum[Πj xi(j), {j, 1, 15}]

{X1s, X2s, X3s, X4s} = Map[Πs.# &, {X1, X2, X3, X4}] / sΠs;
SetPrecision[{X1s, X2s, X3s, X4s}, 10]

{596928.9102, -4.847849314×10^6, 4.088224447×10^6, 13.45201023}
which is very close to the direct numerical solution. This direct numerical solution can be computed as follows:
- the objective to be minimized is the sum of the squared residuals of the equations,

f = Apply[Plus, Table[e^2 /. datan, {i, 0, 5}]] // Simplify

((-6.99566×10^6 + x1)^2 + (2.35378×10^7 + x2)^2 + (9.92791×10^6 + x3)^2 - 1. (-2.42221×10^7 + x4)^2)^2 +
((-2.34603×10^7 + x1)^2 + (9.43352×10^6 + x2)^2 + (-8.17494×10^6 + x3)^2 - 1. (-2.36742×10^7 + x4)^2)^2 +
((-1.50972×10^7 + x1)^2 + (4.63609×10^6 + x2)^2 + (-2.13267×10^7 + x3)^2 - 1. (-2.25271×10^7 + x4)^2)^2 +
((-1.41776×10^7 + x1)^2 + (1.88148×10^7 + x2)^2 + (-1.22439×10^7 + x3)^2 - 1. (-2.11193×10^7 + x4)^2)^2 +
((8.20649×10^6 + x1)^2 + (1.8218×10^7 + x2)^2 + (-1.76052×10^7 + x3)^2 - 1. (-2.09516×10^7 + x4)^2)^2 +
((-1.39999×10^6 + x1)^2 + (1.75637×10^7 + x2)^2 + (-1.97056×10^7 + x3)^2 - 1. (-2.01554×10^7 + x4)^2)^2

To find the global minimum, the built-in function NMinimize is more promising than FindMinimum, which is designed to
search for a local minimum,

solN = NMinimize[f, {x1, x2, x3, x4}]

{2.21338×10^18, {x1 -> 596929., x2 -> -4.84785×10^6, x3 -> 4.08822×10^6, x4 -> 13.4526}}

or

SetPrecision[solN[[2]], 10]

{x1 -> 596928.9104, x2 -> -4.847849314×10^6, x3 -> 4.088224447×10^6, x4 -> 13.45257586}
However, if we employ the norm of the distance error instead of the residual of the equations, namely

en = (di - Sqrt[(x1 - ai)^2 + (x2 - bi)^2 + (x3 - ci)^2] - x4)^2;

then the objective is,

fn = Apply[Plus, Table[en /. datan, {i, 0, 5}]]
(2.25271×10^7 - (Sqrt[(-1.50972×10^7 + x1)^2 + (4.63609×10^6 + x2)^2 + (-2.13267×10^7 + x3)^2] + x4))^2 +
(2.01554×10^7 - (Sqrt[(-1.39999×10^6 + x1)^2 + (1.75637×10^7 + x2)^2 + (-1.97056×10^7 + x3)^2] + x4))^2 +
(2.09516×10^7 - (Sqrt[(8.20649×10^6 + x1)^2 + (1.8218×10^7 + x2)^2 + (-1.76052×10^7 + x3)^2] + x4))^2 +
(2.11193×10^7 - (Sqrt[(-1.41776×10^7 + x1)^2 + (1.88148×10^7 + x2)^2 + (-1.22439×10^7 + x3)^2] + x4))^2 +
(2.36742×10^7 - (Sqrt[(-2.34603×10^7 + x1)^2 + (9.43352×10^6 + x2)^2 + (-8.17494×10^6 + x3)^2] + x4))^2 +
(2.42221×10^7 - (Sqrt[(-6.99566×10^6 + x1)^2 + (2.35378×10^7 + x2)^2 + (9.92791×10^6 + x3)^2] + x4))^2

The optimum will be somewhat different,

solNn = NMinimize[fn, {x1, x2, x3, x4}]

{36.1119, {x1 -> 596930., x2 -> -4.84785×10^6, x3 -> 4.08823×10^6, x4 -> -15.5182}}

or

SetPrecision[solNn[[2]], 10]

{x1 -> 596929.6535, x2 -> -4.847851553×10^6, x3 -> 4.088226796×10^6, x4 -> -15.51822409}
We shall deal with this problem in Chapter 11, in Part II.
7- 3- 3 Parallel Computing

Similar to linear homotopy, the computation time of the Gauss-Jacobi combinatorial method can also be reduced naturally by
parallel computing, since the equations of the different subsets can be solved independently and therefore simultaneously.
Concerning the GPS N-point problem, however, the computation via the parallel technique required more time than the
normal way.

AbsoluteTiming[
 X4 = Map[x4 /. Solve[#[[1]] x4^2 + #[[2]] x4 + #[[3]] == 0, x4][[1, 1]] &, H210c];]

{0., Null}

AbsoluteTiming[
 X4 = ParallelMap[x4 /. Solve[#[[1]] x4^2 + #[[2]] x4 + #[[3]] == 0, x4][[1, 1]] &, H210c];]

{1.4062500, Null}

The explanation for this seemingly paradoxical fact is that the original computation time is already very short, so in parallel
mode the communication time between the cores dominates and needs more time than the net computation itself. However,
when the normal computation time is longer, the parallel method can reduce the running time considerably, see e.g. Section
15- 5- 2.
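The same effect appears in other languages as well. A hedged Python analogue (a thread pool standing in for Mathematica's parallel kernels, with made-up coefficient triples in place of H210c) shows that parallel dispatch returns identical results while adding scheduling overhead for such tiny tasks:

```python
from concurrent.futures import ThreadPoolExecutor
from math import sqrt

def first_root(t):
    """One root of h2 x4^2 + h1 x4 + h0 = 0, as in the session above."""
    h2, h1, h0 = t
    disc = sqrt(h1 * h1 - 4.0 * h2 * h0)
    return (-h1 + disc) / (2.0 * h2)

# made-up coefficient triples; h2 < 0 and h0 > 0 guarantee real roots
triples = [(-0.9 - 0.01 * k, 5.0e7 + 1.0e5 * k, 4.0e7 + 1.0e6 * k) for k in range(15)]

serial = list(map(first_root, triples))
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(first_root, triples))
print(serial == parallel)  # → True
```

Wall-clock timings depend on the machine, so the sketch only checks that the serial and parallel results agree; for workloads this small, measuring both (e.g. with time.perf_counter) typically shows the pooled version to be the slower one.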