UNIVERSITATIS IAGELLONICAE ACTA MATHEMATICA, FASCICULUS XLIII
2005
CONDITIONALLY MINIMAL PROJECTIONS
by Joanna Kowynia
Abstract. Sets of projections which do not contain the orthogonal projection are considered. In such sets, projections of minimal norm are found.
The research presented in this paper has been motivated by earlier results
concerning oblique projections (see [14]).
1. Introduction. Let $(X, \|\cdot\|)$ denote a normed space and let S be a linear subspace of X. A bounded linear operator P : X → S is called a projection if P(s) = s for every s ∈ S. Projections play an important role in approximation, optimization, spectral theory and orthogonal decomposition.
Recently, applications of projections have been found in signal processing [4], [7], [18], sampling [5], [30], information theory [30], wavelets [1] and least squares approximation [16], [17], [31].
Results connected with projections can be found in [8], [9], [28], [29], [33].
A survey of results concerning so-called oblique projections can be found in [14].
Among all projections P : X → S, we will distinguish a minimal projection. A projection $P_0 : X \to S$ is called minimal if
\[
\|P_0\| = \inf\{\|P\| : P \in L(X, S),\ P|_S = \mathrm{id}_S\}.
\]
Numerous papers have been devoted to minimal projections. Let us mention the following [2], [3], [6], [10], [11], [12], [13], [15], [19], [20], [21], [22],
[23], [24], [25], [26], [32].
If X is a Hilbert space, then the orthogonal projection is minimal. So, when seeking minimal projections, we will focus our attention on sets of projections which do not contain the orthogonal projection.
In this paper we will assume that X = R^n.
Key words and phrases. Projection, operator norm, extreme points.
Let us consider the space R^n with the standard inner product given by the following formula:
\[
\langle x, y \rangle = x_1 y_1 + x_2 y_2 + \dots + x_n y_n,
\]
where $x = (x_1, x_2, \dots, x_n)$, $y = (y_1, y_2, \dots, y_n) \in \mathbb{R}^n$.
By L(R^n) we denote the set of all bounded linear operators from R^n to R^n. Let S ⊂ R^n be a closed linear subspace of R^n. By P(S) we denote the set of all projections on R^n with range S:
\[
P(S) = \{Q \in L(\mathbb{R}^n) : Q^2 = Q,\ Q(\mathbb{R}^n) = S\}.
\]
Let P_S denote the orthogonal projection of R^n onto S. Of course, P_S ∈ P(S).
For a given operator A ∈ L(Rn ) and a subspace S ⊂ Rn , set
P(A, S) = {Q ∈ P(S) : AQ = Q^*A}.
As far as only the non-emptiness of the set P(A, S) is concerned, the operator A need not be symmetric. In this paper we will assume that, in bases of the subspaces S and S^⊥, the operator A has a matrix representation $A = [a_{ij}]_{i,j=1}^{n}$ with $a_{ij} = a_{ji}$ for $i = 1, 2, \dots, k$, $j = 1, 2, \dots, n$, where $k = \dim S$.
This is a much more general situation than the one considered in [14].
2. Preliminary results. In this paper we present three types of results:
− for a fixed S ⊂ R^n, we establish a set of bounded operators on R^n such that P(A, S) ≠ ∅ for every operator A from this set;
− we describe nonempty sets P(A, S) for which P_S ∉ P(A, S);
− for some special P(A, S) such that P_S ∉ P(A, S), and for some norms on R^n, we find Q_0 ∈ P(A, S) such that
\[
\|Q_0\| = \inf_{Q \in P(A, S)} \|Q\|.
\]
These results have their origin in earlier research concerning oblique projections
(see [14]).
For a closed k-dimensional subspace S ⊂ R^n we consider
\[
S^{\perp} = \{y \in \mathbb{R}^n : \langle x, y \rangle = 0 \ \text{for all}\ x \in S\}.
\]
Let $v_1, v_2, \dots, v_k$ be a basis of the subspace S and $v_{k+1}, v_{k+2}, \dots, v_n$ a basis of $S^{\perp}$. For any Q ∈ P(S), we obtain the following matrix representation in these bases:
\[
Q = \begin{bmatrix}
1 & 0 & \cdots & 0 & r_{1,k+1} & r_{1,k+2} & \cdots & r_{1,n} \\
0 & 1 & \cdots & 0 & r_{2,k+1} & r_{2,k+2} & \cdots & r_{2,n} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & r_{k,k+1} & r_{k,k+2} & \cdots & r_{k,n} \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0
\end{bmatrix},
\tag{1}
\]
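As a quick illustration of (1), a minimal sketch (assuming Python with numpy and the basis-adapted coordinates above) shows that every choice of the block $R = [r_{ij}]$ yields an idempotent operator whose range has dimension k:

```python
import numpy as np

def projection_from_R(R):
    """Build Q = [[I_k, R], [0, 0]] as in (1); R is a k x (n-k) array."""
    k, nk = R.shape
    n = k + nk
    Q = np.zeros((n, n))
    Q[:k, :k] = np.eye(k)
    Q[:k, k:] = R
    return Q

# Any choice of the block R gives an idempotent operator of rank k.
R = np.array([[2.0, -1.0],
              [0.5, 3.0]])          # illustrative values, k = 2, n = 4
Q = projection_from_R(R)
assert np.allclose(Q @ Q, Q)        # Q^2 = Q
assert np.linalg.matrix_rank(Q) == 2
```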
where $r_{1,j}, r_{2,j}, \dots, r_{k,j} \in \mathbb{R}$, $j = k+1, k+2, \dots, n$. Let us consider the vector of variables
\[
r = (r_{1,k+1}, \dots, r_{k,k+1}, r_{1,k+2}, \dots, r_{k,k+2}, \dots, r_{1,n}, \dots, r_{k,n})^{T}
\]
and let $A = [a_{ij}]_{i,j=1}^{n}$. The equation $AQ = Q^{*}A$ leads to a system of linear equations which can be written as follows:
\[
Cr = b, \tag{2}
\]
where
\[
b = (a_{1,k+1}, \dots, a_{1,n}, a_{2,k+1}, \dots, a_{2,n}, \dots, a_{k,k+1}, \dots, a_{k,n}, 0, \dots, 0)^{T}
\in \mathbb{R}^{\frac{n(n-1)-k(k-1)}{2} \times 1}
\]
and
\[
C^{T} = \begin{bmatrix} C_1 & C_2 & \cdots & C_k & C_{k+1} & C_{k+2} & \cdots & C_{n-1} \end{bmatrix},
\tag{3}
\]
where, for $i = 1, 2, \dots, k$,
\[
C_i = \begin{bmatrix}
a_{i1} & 0 & \cdots & 0 \\
a_{i2} & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
a_{ik} & 0 & \cdots & 0 \\
0 & a_{i1} & \cdots & 0 \\
0 & a_{i2} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & a_{ik} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{i1} \\
0 & 0 & \cdots & a_{i2} \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & a_{ik}
\end{bmatrix}
\]
(so $C_1$ is built from $a_{11}, a_{12}, \dots, a_{1k}$, $C_2$ from $a_{12}, a_{22}, \dots, a_{2k}$, and $C_k$ from $a_{1k}, a_{2k}, \dots, a_{kk}$), while, for $m \in \{k+1, k+2, \dots, n-1\}$, the rows of $C_m$ are grouped into blocks of $k$ corresponding to the variables $r_{1,j}, \dots, r_{k,j}$, $j = k+1, k+2, \dots, n$: the blocks with $j < m$ vanish, the block with $j = m$ equals $\bigl[-a_{i,m+1}\ \ -a_{i,m+2}\ \ \cdots\ \ -a_{i,n}\bigr]_{i=1,\dots,k}$, and the block with $j = m+q$, $q = 1, \dots, n-m$, has $a_{i,m}$ in its $q$-th column and zeros elsewhere. In particular,
\[
C_{k+1} = \begin{bmatrix}
-a_{1,k+2} & -a_{1,k+3} & \cdots & -a_{1,n} \\
\vdots & \vdots & & \vdots \\
-a_{k,k+2} & -a_{k,k+3} & \cdots & -a_{k,n} \\
a_{1,k+1} & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
a_{k,k+1} & 0 & \cdots & 0 \\
0 & a_{1,k+1} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & a_{k,k+1} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{1,k+1} \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & a_{k,k+1}
\end{bmatrix},
\qquad
C_{n-1} = \begin{bmatrix}
0 \\ \vdots \\ 0 \\ -a_{1,n} \\ \vdots \\ -a_{k,n} \\ a_{1,n-1} \\ \vdots \\ a_{k,n-1}
\end{bmatrix}.
\]
There is $C_1, C_2, \dots, C_k \in \mathbb{R}^{k(n-k)\times(n-k)}$, $C_m \in \mathbb{R}^{k(n-k)\times(n-m)}$ for $m \in \{k+1, k+2, \dots, n-1\}$, and $C \in \mathbb{R}^{\frac{n(n-1)-k(k-1)}{2}\times k(n-k)}$, where $k = \dim S$ and $k(n-k)$ is the number of variables.
Example 1. Let n = 3, k = dim S = 1. Let $v_1$ and $v_2, v_3$ be bases of the subspaces S and $S^{\perp}$, respectively. Suppose that the matrix representations of the projection Q and of an operator A are as follows:
\[
Q = \begin{bmatrix} 1 & r_{12} & r_{13} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad
A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{32} & a_{33} \end{bmatrix}.
\]
Then in equation (2) there is
\[
C = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{11} \\ -a_{13} & a_{12} \end{bmatrix},
\qquad r = (r_{12}, r_{13})^{T}, \qquad b = (a_{12}, a_{13}, 0)^{T}.
\]
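The passage from $AQ = Q^{*}A$ to the system (2) can be verified numerically. A minimal sketch, assuming Python with numpy: the helper below (the name build_system is illustrative) assembles C and b directly from the defining condition rather than from the blocks in (3), so the ordering of rows and unknowns differs from (2)-(3), while the solution set is the same.

```python
import numpy as np

def build_system(A, k):
    """Return (C, b) with C r = b equivalent to A Q = Q^T A for
    Q = [[I_k, R], [0, 0]]; r lists the entries of R row by row."""
    n = A.shape[0]
    m = n - k

    def residual(R):                       # entries of A Q - Q^T A, flattened
        Q = np.zeros((n, n))
        Q[:k, :k] = np.eye(k)
        Q[:k, k:] = R
        return (A @ Q - Q.T @ A).ravel()

    b = -residual(np.zeros((k, m)))        # constant part of the residual
    C = np.column_stack([residual(E.reshape(k, m)) + b   # linear part, unknown by unknown
                         for E in np.eye(k * m)])
    return C, b

# Example 1 with concrete numbers (n = 3, k = 1): a11 = 2, a12 = 4, a13 = 6.
A = np.array([[2.0, 4.0, 6.0],
              [4.0, 1.0, 0.0],
              [6.0, 5.0, 3.0]])            # only the first row/column is symmetric
C, b = build_system(A, k=1)
r, *_ = np.linalg.lstsq(C, b, rcond=None)
print(r)                                   # [2., 3.], i.e. r12 = a12/a11, r13 = a13/a11
```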
3. Non-emptiness of the set P(A, S).
Theorem 1. Let $A = [a_{ij}]_{i,j=1}^{n}$, $a_{ij} = a_{ji}$, $i = 1, 2, \dots, k$, $j = 1, 2, \dots, n$, and let S ⊂ R^n be a subspace of dimension k. Then
P(A, S) ≠ ∅ if and only if r(C) = r(C_d),
where
\[
C_d = \begin{bmatrix}
C_1^{T} & A_1 \\
C_2^{T} & A_2 \\
\vdots & \vdots \\
C_k^{T} & A_k \\
C_{k+1}^{T} & O_{k+1} \\
C_{k+2}^{T} & O_{k+2} \\
\vdots & \vdots \\
C_{n-1}^{T} & O_{n-1}
\end{bmatrix},
\]
and
\[
A_1 = \begin{bmatrix} a_{1,k+1} \\ \vdots \\ a_{1,n} \end{bmatrix}, \quad
A_2 = \begin{bmatrix} a_{2,k+1} \\ \vdots \\ a_{2,n} \end{bmatrix}, \quad \dots, \quad
A_k = \begin{bmatrix} a_{k,k+1} \\ \vdots \\ a_{k,n} \end{bmatrix}, \qquad
O_m = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} \in \mathbb{R}^{(n-m)\times 1}, \ \ m \in \{k+1, \dots, n-1\}.
\]
Proof. It is an obvious consequence of the theory of linear equations.
Corollary. Let the matrix representation of an operator A be such that $a_{ij} = 0$, $i = 1, 2, \dots, k$, $j = k+1, k+2, \dots, n$, and let dim S = k.
Then P(A, S) ≠ ∅.
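Theorem 1 is the standard rank criterion for solvability of the linear system (2); up to the arrangement of its rows, C_d is the matrix C with the column b appended. A minimal sketch, assuming numpy and the build_system helper sketched after Example 1:

```python
import numpy as np

def P_nonempty(A, k, tol=1e-10):
    """Theorem 1 as a rank test: P(A, S) is nonempty iff rank(C) equals
    the rank of C with b appended as an extra column (C_d up to row order).
    build_system is the sketch given after Example 1."""
    C, b = build_system(A, k)
    return (np.linalg.matrix_rank(C, tol)
            == np.linalg.matrix_rank(np.column_stack([C, b]), tol))
```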
Note that each operator A considered in this paper has a representation
\[
A = \begin{bmatrix} a & b \\ b^{*} & c \end{bmatrix}, \tag{4}
\]
where $a \in L(S)$, $a = a^{*}$, $b \in L(S^{\perp}, S)$, $c \in L(S^{\perp})$, $S \subset \mathbb{R}^n$, $\dim S = k$. Then, for $A \in L(\mathbb{R}^n)$ with a matrix representation $A = [a_{ij}]_{i,j=1}^{n}$, there is
\[
a = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{1k} & \cdots & a_{kk} \end{bmatrix}, \quad
b = \begin{bmatrix} a_{1,k+1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{k,k+1} & \cdots & a_{k,n} \end{bmatrix}, \quad
c = \begin{bmatrix} a_{k+1,k+1} & \cdots & a_{k+1,n} \\ \vdots & \ddots & \vdots \\ a_{n,k+1} & \cdots & a_{n,n} \end{bmatrix}.
\tag{5}
\]
Theorem 2. Let an operator A have the matrix representation given by formula (4), where the operators a, b, c are given by (5). Let S ⊂ R^n be a k-dimensional subspace. Then
P(A, S) ≠ ∅ if and only if R(b) ⊂ R(a).
Proof. Let v1 , v2 , . . . , vk be a basis of the subspace S and let vk+1 , vk+2 ,
. . ., vn be a basis of the subspace S ⊥ .
Let us assume that R(b) ⊂ R(a). Hence,
bvk+1 , bvk+2 , . . . , bvn ∈ R(a),
which means that there exist xk+1 , xk+2 , . . . , xn ∈ S such that
bvk+1 = axk+1 , bvk+2 = axk+2 , . . . , bvn = axn .
From the matrix representation of the operator b (see (5)) we conclude that
\[
bv_{k+1} = \begin{bmatrix} a_{1,k+1} \\ a_{2,k+1} \\ \vdots \\ a_{k,k+1} \end{bmatrix}, \quad
bv_{k+2} = \begin{bmatrix} a_{1,k+2} \\ a_{2,k+2} \\ \vdots \\ a_{k,k+2} \end{bmatrix}, \quad \dots, \quad
bv_{n} = \begin{bmatrix} a_{1,n} \\ a_{2,n} \\ \vdots \\ a_{k,n} \end{bmatrix}.
\]
Let $x_{k+1}, x_{k+2}, \dots, x_n$ have the following representations in the given basis of S:
\[
x_{k+1} = (x_{1,k+1}, \dots, x_{k,k+1}), \quad \dots, \quad x_{n} = (x_{1,n}, \dots, x_{k,n}).
\]
Then
\[
ax_{k+1} = \begin{bmatrix} a_{11}x_{1,k+1} + \dots + a_{1k}x_{k,k+1} \\ \vdots \\ a_{1k}x_{1,k+1} + \dots + a_{kk}x_{k,k+1} \end{bmatrix}, \quad \dots, \quad
ax_{n} = \begin{bmatrix} a_{11}x_{1,n} + \dots + a_{1k}x_{k,n} \\ \vdots \\ a_{1k}x_{1,n} + \dots + a_{kk}x_{k,n} \end{bmatrix}.
\]
Hence, we obtain
\[
\begin{aligned}
a_{11}x_{1,k+1} + \dots + a_{1k}x_{k,k+1} &= a_{1,k+1}, \\
&\ \ \vdots \\
a_{1k}x_{1,k+1} + \dots + a_{kk}x_{k,k+1} &= a_{k,k+1}, \\
&\ \ \vdots \\
a_{11}x_{1,n} + \dots + a_{1k}x_{k,n} &= a_{1,n}, \\
&\ \ \vdots \\
a_{1k}x_{1,n} + \dots + a_{kk}x_{k,n} &= a_{k,n}.
\end{aligned}
\]
Consequently, the system of linear equations given by the matrix C (see formula (3)) has a solution. This means that P(A, S) ≠ ∅.
Now, let us assume that P(A, S) ≠ ∅ and take Q ∈ P(A, S). Comparing, as above, the system determined by AQ = Q^*A with the representation (1) of Q, we find $x_{k+1}, x_{k+2}, \dots, x_n \in S$ such that
$bv_{k+1} = ax_{k+1}$, $bv_{k+2} = ax_{k+2}$, \dots, $bv_{n} = ax_{n}$.
This means that bvk+1 , . . . , bvn ∈ R(a). Since R(a) is a linear space,
γk+1 bvk+1 + . . . + γn bvn = b(γk+1 vk+1 + . . . + γn vn ) ∈ R(a)
for any γk+1 , . . . , γn ∈ R.
From this we conclude that b(S ⊥ ) = R(b) ⊂ R(a). The proof is complete.
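The criterion of Theorem 2 can also be tested directly on the blocks of (4)-(5): R(b) ⊂ R(a) holds exactly when appending the columns of b to a does not increase the rank. A minimal sketch, assuming numpy and the basis-adapted coordinates:

```python
import numpy as np

def P_nonempty_blocks(A, k, tol=1e-10):
    """Theorem 2: P(A, S) is nonempty iff R(b) is contained in R(a),
    where a = A[:k, :k] and b = A[:k, k:] in the adapted basis."""
    a_blk, b_blk = A[:k, :k], A[:k, k:]
    return (np.linalg.matrix_rank(a_blk, tol)
            == np.linalg.matrix_rank(np.column_stack([a_blk, b_blk]), tol))
```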
Remark. Note that, for the space Rn , Theorem 2 is a generalization of
the result obtained in [14].
In [14], a positive operator A has been considered, whereas an operator A
considered in Theorem 2 need not even be symmetric.
Corollary. Let the assumptions of Theorem 2 hold. If b ≡ 0, then P(A, S) ≠ ∅.
4. The orthogonal projection. In what follows, we will assume that P(A, S) ≠ ∅. If the orthogonal projection P_S belongs to P(A, S), then P_S is an element of minimal norm.
The theorem below characterizes the operators A ∈ L(R^n) such that P_S ∈ P(A, S).
Theorem 3. Let an operator A have the matrix representation $A = [a_{ij}]_{i,j=1}^{n}$, $a_{ij} = a_{ji}$, $i = 1, 2, \dots, k$, $j = 1, 2, \dots, n$, and let dim S = k. Then P_S ∈ P(A, S) if and only if
\[
a_{ij} = a_{ji} = 0, \qquad i = 1, 2, \dots, k, \quad j = k+1, k+2, \dots, n.
\]
Proof. It suffices to use the results of the theory of linear homogeneous equations.
Theorem 4. Let the assumptions of Theorem 3 hold and let P(A, S) ≠ ∅. Then P_S ∉ P(A, S) if and only if there exist l ∈ {1, 2, \dots, k} and m ∈ {k+1, k+2, \dots, n} such that $a_{lm} \neq 0$.
Additionally:
if r(C) = k(n − k), then #P(A, S) = 1;
if r(C) < k(n − k), then P(A, S) is a non-trivial affine subspace of L(R^n),
where the matrix C is given by (2) and r(C) denotes the rank of C.
Proof. It suffices to use the results of the theory of linear equations.
Example 2. Let us consider the situation of Example 1. It leads to the following system of linear equations:
\[
\begin{cases}
a_{11}r_{12} = a_{12}, \\
a_{11}r_{13} = a_{13}, \\
-a_{13}r_{12} + a_{12}r_{13} = 0.
\end{cases}
\]
If we assume that a solution of this system exists, then it is unique if and only if $a_{11} \neq 0$, and then P(A, S) has exactly one element. The set P(A, S) is infinite if and only if $a_{11} = a_{12} = a_{13} = 0$. In the latter case, $r_{12}, r_{13}$ may be chosen arbitrarily, so P_S ∈ P(A, S).
Example 3. Let us consider the following situation: n = 3, dim S = 2, and
\[
A = \begin{bmatrix} 4 & 2 & 2 \\ 2 & 1 & 1 \\ 2 & 1 & 5 \end{bmatrix}, \qquad
Q = \begin{bmatrix} 1 & 0 & r_{13} \\ 0 & 1 & r_{23} \\ 0 & 0 & 0 \end{bmatrix}.
\]
By (3) there is
\[
\begin{cases}
4r_{13} + 2r_{23} = 2, \\
2r_{13} + r_{23} = 1.
\end{cases}
\]
Hence, $r_{23} = 1 - 2r_{13}$, $r_{13} \in \mathbb{R}$, and thus
\[
P(A, S) = \left\{\, Q = \begin{bmatrix} 1 & 0 & r_{13} \\ 0 & 1 & 1 - 2r_{13} \\ 0 & 0 & 0 \end{bmatrix} : r_{13} \in \mathbb{R} \,\right\}.
\]
So P(A, S) is not finite and P_S ∉ P(A, S).
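For the data of Example 3, the build_system sketch from Section 2 (assuming numpy) recovers the same one-parameter family: a particular solution together with a one-dimensional space of homogeneous solutions, which encodes $r_{23} = 1 - 2r_{13}$:

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 1.0, 1.0],
              [2.0, 1.0, 5.0]])
C, b = build_system(A, k=2)                 # unknowns ordered (r13, r23)
r0, *_ = np.linalg.lstsq(C, b, rcond=None)  # a particular solution, here (0.4, 0.2)
_, s, Vt = np.linalg.svd(C)
null = Vt[np.sum(s > 1e-10):]               # homogeneous solutions: multiples of (1, -2)
print(r0, null)                             # so r23 = 1 - 2*r13, as in Example 3
```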
5. Projections of minimum norm. Now we focus our attention on the case where P(A, S) has more than one element and P_S ∉ P(A, S). In such a case, for a special class of operators A, we will find an element Q ∈ P(A, S) of minimum norm. We will consider different norms on R^n.
Let S ⊂ R^n be a k-dimensional subspace and let the operator A have the following matrix representation:
\[
A = \begin{bmatrix}
a_{11} & 0 & \cdots & 0 & a_{1k} & 0 & \cdots & 0 & a_{1n} \\
0 & a_{22} & 0 & \cdots & 0 & 0 & \cdots & \cdots & 0 \\
\vdots & & \ddots & & \vdots & \vdots & & & \vdots \\
a_{1k} & 0 & \cdots & 0 & a_{kk} & 0 & \cdots & \cdots & 0 \\
0 & 0 & \cdots & \cdots & 0 & a_{k+1,k+1} & \cdots & \cdots & a_{k+1,n} \\
\vdots & & & & \vdots & \vdots & & & \vdots \\
a_{1n} & 0 & \cdots & \cdots & 0 & a_{n,k+1} & \cdots & \cdots & a_{n,n}
\end{bmatrix},
\]
where $a_{1k}, a_{1n} \neq 0$.
Any projection Q admits a representation of the form (1). From the equation $AQ = Q^{*}A$ we obtain
\[
\begin{cases}
a_{11}r_{1,n} + a_{1k}r_{k,n} = a_{1,n}, \\
a_{1k}r_{1,n} + a_{kk}r_{k,n} = 0, \\
a_{22}r_{2,k+1} = \dots = a_{22}r_{2,n} = 0, \\
\quad\vdots \\
a_{k-1,k-1}r_{k-1,k+1} = \dots = a_{k-1,k-1}r_{k-1,n} = 0, \\
r_{1,k+1} = \dots = r_{1,n-1} = r_{k,k+1} = \dots = r_{k,n-1} = 0.
\end{cases}
\]
Hence:
− if there exists l ∈ {2, 3, \dots, k−1} such that $a_{ll} = 0$, then $r_{l,k+1}, \dots, r_{l,n} \in \mathbb{R}$;
− if $a_{22}, \dots, a_{k-1,k-1} \neq 0$, then $r_{2,k+1} = \dots = r_{k-1,n} = 0$.
Additionally, from the equations
\[
\begin{cases}
a_{11}r_{1,n} + a_{1k}r_{k,n} = a_{1,n}, \\
a_{1k}r_{1,n} + a_{kk}r_{k,n} = 0,
\end{cases}
\]
we obtain (assuming $a_{1k}^{2} \neq a_{11}a_{kk}$):
\[
r_{1,n} = \frac{-a_{kk}a_{1,n}}{a_{1k}^{2} - a_{11}a_{kk}}, \qquad
r_{k,n} = \frac{a_{1,n}a_{1k}}{a_{1k}^{2} - a_{11}a_{kk}}.
\]
So $r_{k,n} \neq 0$, and $r_{1,n} = 0$ if and only if $a_{kk} = 0$.
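The two closed-form entries $r_{1,n}$ and $r_{k,n}$ can be double-checked by solving the corresponding 2×2 system directly. A minimal sketch with arbitrarily chosen values, assuming numpy and $a_{1k}^{2} \neq a_{11}a_{kk}$:

```python
import numpy as np

a11, a1k, akk, a1n = 1.0, 2.0, 1.0, 3.0      # arbitrary test values
M = np.array([[a11, a1k],
              [a1k, akk]])
r1n, rkn = np.linalg.solve(M, np.array([a1n, 0.0]))
den = a1k**2 - a11 * akk
assert np.isclose(r1n, -akk * a1n / den)     # r1n = -akk*a1n / (a1k^2 - a11*akk)
assert np.isclose(rkn, a1n * a1k / den)      # rkn =  a1n*a1k / (a1k^2 - a11*akk)
```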
Summarizing the above, we obtain the following description of P(A, S):
− if there exists l ∈ {2, 3, \dots, k−1} such that $a_{ll} = 0$, then
\[
P(A, S) = \left\{\, Q = \begin{bmatrix}
1 & 0 & \cdots & \cdots & 0 & 0 & \cdots & 0 & \tfrac{-a_{kk}a_{1n}}{a_{1k}^{2}-a_{11}a_{kk}} \\
0 & 1 & & & \vdots & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & & & r_{l,k+1} & \cdots & r_{l,n-1} & r_{l,n} \\
\vdots & & & \ddots & & 0 & \cdots & 0 & 0 \\
0 & \cdots & \cdots & 0 & 1 & 0 & \cdots & 0 & \tfrac{a_{1n}a_{1k}}{a_{1k}^{2}-a_{11}a_{kk}} \\
0 & \cdots & \cdots & \cdots & 0 & 0 & \cdots & \cdots & 0 \\
\vdots & & & & \vdots & \vdots & & & \vdots \\
0 & \cdots & \cdots & \cdots & 0 & 0 & \cdots & \cdots & 0
\end{bmatrix}
: \ r_{l,k+1}, \dots, r_{l,n} \in \mathbb{R} \,\right\},
\tag{6}
\]
where the free entries $r_{l,k+1}, \dots, r_{l,n}$ occupy the l-th row;
− if $a_{22}, \dots, a_{kk} \neq 0$, then
\[
P(A, S) = \left\{\, Q = \begin{bmatrix}
1 & 0 & \cdots & 0 & 0 & \cdots & 0 & \tfrac{-a_{kk}a_{1n}}{a_{1k}^{2}-a_{11}a_{kk}} \\
0 & 1 & & \vdots & \vdots & & \vdots & 0 \\
\vdots & & \ddots & & \vdots & & \vdots & \vdots \\
0 & \cdots & \cdots & 1 & 0 & \cdots & 0 & \tfrac{a_{1n}a_{1k}}{a_{1k}^{2}-a_{11}a_{kk}} \\
0 & \cdots & \cdots & 0 & 0 & \cdots & \cdots & 0 \\
\vdots & & & \vdots & \vdots & & & \vdots \\
0 & \cdots & \cdots & 0 & 0 & \cdots & \cdots & 0
\end{bmatrix} \,\right\}.
\tag{7}
\]
In case (7), P(A, S) has exactly one element, which is not interesting when looking for an element of minimum norm.
From now on, we focus our attention on the case given by (6). In such a case, the set P(A, S) is not finite.
Example 4. Let us assume that an operator Q has the matrix representation $Q = [q_{ij}]_{i,j=1}^{n}$ and let us consider the following norm:
\[
\|Q\| = \sqrt{\sum_{i,j=1}^{n} |q_{ij}|^{2}}.
\]
For any Q ∈ P(A, S), where P(A, S) is given by (6), there is
\[
\|Q\|^{2} = k + r_{l,k+1}^{2} + \dots + r_{l,n}^{2}
+ \left(\frac{-a_{kk}a_{1n}}{a_{1k}^{2} - a_{11}a_{kk}}\right)^{2}
+ \left(\frac{a_{1n}a_{1k}}{a_{1k}^{2} - a_{11}a_{kk}}\right)^{2}.
\]
The above expression attains its minimum if $r_{l,k+1} = \dots = r_{l,n} = 0$.
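In Example 4 the free entries enter $\|Q\|^{2}$ only through their own squares, so setting them to zero is optimal. A minimal sketch, assuming numpy; the dimensions n = 5, k = 3, l = 2 and the values c1, ck standing in for the two fixed entries are chosen only for illustration:

```python
import numpy as np

def Q_of(free, c1, ck, n=5, k=3, l=2):
    """A member of P(A, S) of the form (6): identity block, fixed entries
    c1 = r_{1n}, ck = r_{kn} in the last column, free entries in row l."""
    Q = np.zeros((n, n))
    Q[:k, :k] = np.eye(k)
    Q[0, n - 1], Q[k - 1, n - 1] = c1, ck
    Q[l - 1, k:] = free                     # r_{l,k+1}, ..., r_{l,n}
    return Q

c1, ck = -0.75, 1.5                         # stand-ins for the two closed-form values
norms = [np.linalg.norm(Q_of(np.array(f), c1, ck))       # Frobenius norm
         for f in ([0.0, 0.0], [0.3, -0.2], [1.0, 2.0])]
assert norms[0] == min(norms)               # zero free entries give the minimum
```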
Now, for a projection Q ∈ P(A, S), we will consider different types of operator norms, induced by different norms on the space R^n. For an operator Q : R^n → R^n, the operator norm is defined by the formula
\[
\|Q\|_{op} = \sup_{\|x\|=1} \|Qx\|. \tag{8}
\]
Let D be a set in R^n. An element d ∈ D is called an extreme point of D if, whenever $d = \alpha d_1 + (1-\alpha)d_2$ for some $d_1, d_2 \in D$ and $\alpha \in (0, 1)$, then $d = d_1 = d_2$.
Let us denote by E ⊂ R^n the set of extreme points of the unit ball. It is well known (see, e.g., [27]) that
\[
\|Q\|_{op} = \sup_{y \in E} \|Qy\|.
\]
Example 5. Consider R^n with the $\ell_1$-norm, that is,
\[
\|x\|_{1} = \sum_{i=1}^{n} |x_i|, \qquad x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n. \tag{9}
\]
Then
\[
E = \{(\pm 1, 0, \dots, 0), \dots, (0, \dots, 0, \pm 1)\}, \qquad \#E = 2n.
\]
Because of the definition of $\|\cdot\|_1$, it is enough to consider the norms $\|Qe_i\|_1$, $i = 1, 2, \dots, n$, where $e_i = (0, \dots, 0, 1, 0, \dots, 0)$ (with 1 in the i-th position). Because of the matrix representation of any Q ∈ P(A, S), where P(A, S) is given by (6), we have
\[
Qe_1 = (1, 0, \dots, 0), \ \dots, \ Qe_k = (0, \dots, 0, 1, 0, \dots, 0) \ \text{(1 in the $k$-th position)},
\]
\[
Qe_{k+1} = (r_{1,k+1}, \dots, r_{k,k+1}, 0, \dots, 0), \ \dots, \ Qe_n = (r_{1,n}, \dots, r_{k,n}, 0, \dots, 0),
\]
where
\[
r_{1,n} = \frac{-a_{kk}a_{1n}}{a_{1k}^{2} - a_{11}a_{kk}}, \qquad
r_{k,n} = \frac{a_{1n}a_{1k}}{a_{1k}^{2} - a_{11}a_{kk}}.
\]
From the above, for any Q ∈ P(A, S), we obtain
\[
\|Q\|_{op} = \max\Big\{1,\ \sum_{i=1}^{k}|r_{i,k+1}|,\ \dots,\ \sum_{i=1}^{k}|r_{i,n-1}|,\
\Big|\frac{-a_{kk}a_{1n}}{a_{1k}^{2} - a_{11}a_{kk}}\Big| + |r_{2,n}| + \dots + |r_{k-1,n}| + \Big|\frac{a_{1n}a_{1k}}{a_{1k}^{2} - a_{11}a_{kk}}\Big|\Big\}.
\]
So, for any Q ∈ P(A, S), there is $\|Q\|_{op} \ge 1$, and $\|Q\|_{op} = 1$ if, in its matrix representation, Q has elements such that
\[
\sum_{i=1}^{k}|r_{i,k+1}| \le 1,\ \dots,\ \sum_{i=1}^{k}|r_{i,n-1}| \le 1, \qquad
\Big|\frac{-a_{kk}a_{1n}}{a_{1k}^{2} - a_{11}a_{kk}}\Big| + |r_{2,n}| + \dots + |r_{k-1,n}| + \Big|\frac{a_{1n}a_{1k}}{a_{1k}^{2} - a_{11}a_{kk}}\Big| \le 1.
\]
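For the $\ell_1$ norm, the maximum over the extreme points $\pm e_i$ is the largest absolute column sum, which is what the display above spells out entry by entry. A minimal check, assuming numpy; the matrix below is the member of P(A, S) from Example 3 with $r_{13} = 0.4$:

```python
import numpy as np

def op_norm_l1(Q):
    """Operator norm of Q : (R^n, ||.||_1) -> (R^n, ||.||_1), computed as the
    largest absolute column sum (the maximum over the extreme points +-e_i)."""
    return np.abs(Q).sum(axis=0).max()

Q = np.array([[1.0, 0.0, 0.4],
              [0.0, 1.0, 0.2],
              [0.0, 0.0, 0.0]])
E = np.vstack([np.eye(3), -np.eye(3)])        # extreme points of the l1 unit ball
assert np.isclose(op_norm_l1(Q), max(np.abs(Q @ e).sum() for e in E))
print(op_norm_l1(Q))                          # 1.0: this Q already has minimal norm
```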
Example 6. Consider R^n with the norm $\|\cdot\|_{\max}$, where
\[
\|x\|_{\max} = \max\{|x_1|, |x_2|, \dots, |x_n|\}, \qquad x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n. \tag{10}
\]
Then
\[
E = \{e^{i} = (e^{i}_{1}, \dots, e^{i}_{n}) : e^{i}_{l} \in \{1, -1\},\ l = 1, 2, \dots, n\}, \qquad i = 1, 2, \dots, 2^{n}.
\]
So there is
\[
\|Q\|_{op} = \max\Big\{\Big|1 + \sum_{i=k+1}^{n} r_{1,i}\Big|,\ \Big|1 + \sum_{i=k+1}^{n} r_{2,i}\Big|,\ \dots,\ \Big|1 + \sum_{i=k+1}^{n} r_{k,i}\Big|\Big\},
\]
where $r_{m,l} \ge 0$, $m = 1, 2, \dots, k$, $l = k+1, \dots, n$. Thus the minimum norm is attained for an operator Q whose matrix representation satisfies
\[
r_{1,k+1} = \dots = r_{1,n-1} = r_{2,k+1} = \dots = r_{2,n} = \dots = r_{k,k+1} = \dots = r_{k,n-1} = 0.
\]
Example 7. Consider
\[
Q : (\mathbb{R}^n, \|\cdot\|_{1}) \longrightarrow (\mathbb{R}^n, \|\cdot\|_{\max}),
\]
where $\|\cdot\|_{1}$, $\|\cdot\|_{\max}$ are given by (9) and (10), respectively. Then
\[
E = \{(\pm 1, 0, \dots, 0), \dots, (0, \dots, 0, \pm 1)\}, \qquad \#E = 2n,
\]
and
\[
\|Q\|_{op} = \max\{1, |r_{1,k+1}|, \dots, |r_{k,k+1}|, \dots, |r_{1,n}|, \dots, |r_{k,n}|\}.
\]
So $\|Q\| \ge 1$ and $\|Q\| = 1$ if $|r_{l,m}| \le 1$ for $l = 1, 2, \dots, k$, $m = k+1, k+2, \dots, n$.
Example 8. We consider
\[
Q : (\mathbb{R}^n, \|\cdot\|_{\max}) \longrightarrow (\mathbb{R}^n, \|\cdot\|_{1}),
\]
where $\|\cdot\|_{1}$, $\|\cdot\|_{\max}$ are given by (9) and (10), respectively. Then
\[
E = \{e^{i} = (e^{i}_{1}, \dots, e^{i}_{n}) : e^{i}_{l} \in \{1, -1\},\ l = 1, 2, \dots, n\}, \qquad i = 1, 2, \dots, 2^{n},
\]
and
\[
\|Q\|_{op} = |1 + r_{1,k+1} + \dots + r_{1,n}| + \dots + |1 + r_{k,k+1} + \dots + r_{k,n}|,
\]
where $r_{l,m} \ge 0$, $l = 1, 2, \dots, k$, $m = k+1, k+2, \dots, n$. So Q ∈ P(A, S) is of minimum norm if its matrix representation satisfies
\[
r_{1,k+1} = \dots = r_{1,n-1} = \dots = r_{k,k+1} = \dots = r_{k,n-1} = 0.
\]
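Examples 5-8 all evaluate (8) over the finite set E of extreme points of the relevant unit ball, which can be done by brute force for small n. A minimal sketch, assuming numpy, covering the two domain norms used above:

```python
import numpy as np
from itertools import product

def op_norm(Q, domain="max", codomain="l1"):
    """Evaluate (8) as a maximum over the extreme points E of the domain
    unit ball: the sign vectors {+-1}^n for ||.||_max, +-e_i for ||.||_1."""
    n = Q.shape[1]
    if domain == "max":
        E = [np.array(s) for s in product((-1.0, 1.0), repeat=n)]
    else:                                   # l1 domain
        E = list(np.eye(n)) + list(-np.eye(n))
    out = (lambda y: np.abs(y).sum()) if codomain == "l1" else (lambda y: np.abs(y).max())
    return max(out(Q @ e) for e in E)
```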
References
1. Aldroubi A., Oblique and hierarchical multiwavelet bases, Appl. Comput. Harmon. Anal.,
4, No. 3 (1997), 231–263.
2. Baronti M., Papini P.L., Norm one projections onto subspaces of lp , Ann. Mat. Pura
Appl., 4 (1988), 53–61.
3. Baronti M., Lewicki G., Strongly unique minimal projections on hyperplanes, J. Approx.
Theory, 78 (1994), 1–18.
4. Behrens R.T., Scharf L.L., Signal processing applications of oblique projection operators,
IEEE Trans. Signal Processing, 42 (1994), 1413–1424.
5. Benedetto J., Frames, sampling and seizure prediction. Advances in wavelets (Hong
Kong 1997), Springer, Singapore, 1999, 1–25.
6. Blatter J., Cheney E.W., Minimal projections onto hyperplanes in sequence spaces, Ann.
Mat. Pura Appl., 101 (1974), 215–227.
7. Blu T., Unser M., Quantitative Fourier analysis of approximation techniques. I. Interpolators and projections, IEEE Trans. Signal Processing, 47 (1999), 2783–2795.
8. Chalmers B.L., Lewicki G., Symmetric spaces with maximal projection constants, J. Funct. Anal., 200 (2003), 1–22.
9. Chalmers B.L., Lewicki G., Symmetric subspaces of l1 with large projection constants,
Studia Math., 134, No. 2 (1999), 119–133.
10. Chalmers B.L., Metcalf F.T., The determination of minimal projections and extensions
in L1 , Trans. Amer. Math. Soc., 329 (1992), 289–305.
11. Cheney E.W., Light W.A., Approximation Theory in Tensor Product Spaces, Lecture
Notes in Math., Dold A. and Eckman B. (eds.), Vol. 1169, Springer-Verlag, Berlin, 1985.
12. Cheney E.W., Morris P.D., On the existence and characterization of minimal projections,
J. Reine Angew. Math., 270 (1974), 61–76.
13. Cheney E.W., Price K.H., Minimal projections, in: Approximation Theory, Talbot A.
(ed.), Proc. Symp. Lancaster, July 1969, Academic Press, London/New York, 1970, 261–
289.
14. Corach G., Maestripieri A., Stojanoff D., A Classification of Projectors, Banach Center
Publ., Polish Acad. Sci. (to appear).
15. Franchetti C., Projections onto hyperplanes in Banach spaces, J. Approx. Theory, 38
(1983), 319–333.
16. Forsgren A., On linear least-squares problems with diagonally dominant weight matrices,
SIAM J. Matrix Anal. Appl., 17 (1996), 763–788.
17. Forsgren A., Sporre G., On weighted linear least-squares problems related to interior
methods for convex quadratic programming, SIAM J. Matrix Anal. Appl., 23 (2001),
42–56.
18. Kayalar S., Weinert H.L., Oblique projections: formulas, algorithms and error bounds,
Math. Control Signal Systems, 2 (1989), 33–45.
19. Kilgore T.A., A characterization of Lagrange interpolating projection with minimal
Tchebycheff norm, J. Approx. Theory, 24 (1978), 273–288.
20. Lewicki G., Kolmogorov’s type criteria for spaces of compact operators, J. Approx. Theory, 64 (1991), 181–202.
21. Lewicki G., Minimal Projections onto Two Dimensional Subspaces of $l_{\infty}^{(4)}$, J. Approx. Theory, 88, No. 1 (1997), 92–108.
22. Lewicki G., Minimal projections onto subspaces of $l_{\infty}^{(n)}$ of codimension two, Collect. Math., 44 (1993), 167–179.
23. Lewicki G., Best approximation in spaces of bounded, linear operators, Dissertationes
Math., 330 (1994).
24. Odinec V.P. (Odyniec W.), Codimension one minimal projections in Banach spaces and
a mathematical programming problem, Dissertationes Math., 254 (1986).
25. Odyniec W., Lewicki G., Minimal Projections in Banach Spaces, Lecture Notes in Math.,
Vol. 1449, Springer-Verlag, Berlin/Heidelberg/New York, 1990.
26. Rolewicz S., On projections on subspaces of codimension one, Studia Math., 44 (1990),
17–19.
27. Rudin W., Functional Analysis, McGraw-Hill Book Company, New York, 1973.
28. Steinberg J., A class of integral transforms which are projections, Integral Equations
Operator Theory, 34, No. 1 (1999), 65–90.
29. Steinberg J., Oblique projections in Hilbert spaces, Integral Equations Operator Theory,
38, No. 1 (2000), 81–119.
30. Unser M., Quasi-orthogonality and quasi-projections, Applied Comp. Harmonic Anal., 3
(1996), 201–214.
31. Wei M.S., Upper bound and stability of scaled pseudoinverses, Numer. Math., 72 (1995),
285–293.
32. Wojtaszczyk P., Banach spaces for analysts, Cambridge Studies in Advanced Math.,
Vol. 25, Cambridge Univ. Press, Cambridge, 1991.
33. Wulbert D.E., Some complemented function spaces in C(X), Pacific J. Math., 24, No.
3 (1968), 586–602.
Received
October 25, 2004
AGH University of Science and Technology
Department of Mathematics
al. Mickiewicza 30
30-059 Kraków
Poland
e-mail : kowynia@wms.mat.agh.edu.pl