Class-oriented
Regression Embedding
Presenter: 陈 燚
August 25, 2011
Outline
1. Background
2. Related Works
2.1 Linear Regression-based Classification
2.2 Neighborhood Preserving Embedding &
Sparsity Preserving Projections
3. Class-oriented Regression Embedding
4. Experiments
1. Background
Background
• The minimum reconstruction error criterion is widely used in recent subspace classification methods, such as SRC and LRC.
J. Wright, A. Yang, S. Sastry, Y. Ma, Robust face recognition via sparse
representation, IEEE Trans. Pattern Anal. Mach. Intell. 31 (2), 210–227, 2009.
I. Naseem, R. Togneri, and M. Bennamoun. Linear Regression for Face
Recognition. IEEE Trans. on PAMI, 2010.
A brief review
• SRC: $\mathbf{y} = \mathbf{X}\boldsymbol{\alpha}$
• LRC: $\mathbf{y} = \mathbf{X}_i \boldsymbol{\beta}_i$, where $\boldsymbol{\beta}_i$ is the coefficient vector of the $i$th class
• Classification rule: $\min_i \| \mathbf{y} - \mathbf{X}_i \boldsymbol{\beta}_i \|$
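SRC and LRC obtain the coefficients differently (sparse coding vs. class-wise least squares), but they share the decision rule above. A minimal NumPy sketch of that shared rule, assuming the per-class coefficient vectors have already been computed by the respective solver (the variable names here are illustrative):

```python
import numpy as np

def min_residual_class(y, class_dicts, class_coeffs):
    """Decision rule shared by SRC and LRC: assign y to the class whose
    reconstruction X_i @ beta_i has the smallest residual.

    class_dicts  -- list of (d, p_i) matrices X_i, one per class
    class_coeffs -- list of (p_i,) coefficient vectors beta_i, one per class
    """
    residuals = [np.linalg.norm(y - X_i @ beta_i)
                 for X_i, beta_i in zip(class_dicts, class_coeffs)]
    return int(np.argmin(residuals))
```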
Nearest Space Classifiers
• Definition: The nearest subspace of a given
sample
• Measurement: Reconstruction Error
Stan Z. Li: Face Recognition Based on Nearest Linear
Combinations. CVPR 1998: 839-844
2. Related Works
LRC
Linear subspace assumption: $\mathbf{X}_i = [\mathbf{x}_i^1, \mathbf{x}_i^2, \ldots, \mathbf{x}_i^{p_i}]$, $\quad \mathbf{y} = \mathbf{X}_i \boldsymbol{\beta}_i$
Least squares: $\boldsymbol{\beta}_i = (\mathbf{X}_i^T \mathbf{X}_i)^{-1} \mathbf{X}_i^T \mathbf{y}$
Reconstruction by the $i$th class: $\hat{\mathbf{y}}_i = \mathbf{X}_i (\mathbf{X}_i^T \mathbf{X}_i)^{-1} \mathbf{X}_i^T \mathbf{y}$
Reconstruction error: $d_i(\mathbf{y}) = \| \mathbf{y} - \hat{\mathbf{y}}_i \|^2$
The sample is assigned to the class with the minimum reconstruction error: $\min_i d_i(\mathbf{y}), \ i = 1, 2, \ldots, c$
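A minimal sketch of the LRC procedure above in NumPy. `np.linalg.lstsq` replaces the explicit inverse $(\mathbf{X}_i^T\mathbf{X}_i)^{-1}\mathbf{X}_i^T$ for numerical stability; the two coincide when $\mathbf{X}_i^T\mathbf{X}_i$ is nonsingular.

```python
import numpy as np

def lrc_classify(y, class_dicts):
    """Linear Regression-based Classification (LRC): fit each class by
    least squares, reconstruct y, and pick the class with the smallest
    reconstruction error d_i(y) = ||y - y_hat_i||^2."""
    errors = []
    for X_i in class_dicts:                       # X_i has shape (d, p_i)
        beta_i, *_ = np.linalg.lstsq(X_i, y, rcond=None)
        y_hat_i = X_i @ beta_i                    # class-i reconstruction
        errors.append(np.sum((y - y_hat_i) ** 2))
    return int(np.argmin(errors))
```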
NPE & SPP
Objective function:
$\min \sum_i \big\| \mathbf{x}_i - \sum_j W_{ij} \mathbf{x}_j \big\|^2, \quad \text{s.t.} \ \sum_j W_{ij} = 1$
$\min_{\mathbf{a}} \ \mathbf{a}^T \mathbf{X} \mathbf{M} \mathbf{X}^T \mathbf{a}, \quad \text{s.t.} \ \mathbf{a}^T \mathbf{X} \mathbf{X}^T \mathbf{a} = 1, \qquad \mathbf{M} = (\mathbf{I} - \mathbf{W})^T (\mathbf{I} - \mathbf{W})$
The difference between NPE and SPP lies in the reconstruction strategy (a sketch of the shared embedding step is given after the references):
• NPE: KNN reconstruction
• SPP: global sparse reconstruction
Xiaofei He, Deng Cai, Shuicheng Yan, and HongJiang Zhang.
Neighborhood preserving embedding, ICCV, 1208–1213, 2005.
Qiao, L.S., Chen, S.C., Tan, X.Y., Sparsity preserving projections with
applications to face recognition. Pattern Recognition 43 (1), 331–341,
2010.
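Once the reconstruction weights $W$ are built (by KNN reconstruction for NPE, by sparse coding for SPP), both methods share the embedding step above. A hedged NumPy/SciPy sketch of that shared step; the tiny ridge on $\mathbf{X}\mathbf{X}^T$ is only a numerical guard and not part of either formulation.

```python
import numpy as np
from scipy.linalg import eigh

def reconstruction_preserving_embedding(X, W, d):
    """Shared embedding step of NPE and SPP.

    X : (D, n) data matrix with samples as columns.
    W : (n, n) reconstruction weights (KNN-based for NPE, sparse for SPP).
    Solves min_a a^T X M X^T a s.t. a^T X X^T a = 1 with
    M = (I - W)^T (I - W); the eigenvectors of the d smallest
    generalized eigenvalues form the projection.
    """
    n = X.shape[1]
    IW = np.eye(n) - W
    A = X @ IW.T @ IW @ X.T
    # Tiny ridge keeps the right-hand matrix positive definite when
    # X X^T is singular (small-sample-size case); numerical guard only.
    B = X @ X.T + 1e-8 * np.eye(X.shape[0])
    _, vecs = eigh(A, B)             # eigenvalues in ascending order
    return vecs[:, :d]               # (D, d) projection matrix
```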
3. Class-oriented
Regression Embedding
Assumption of SRC and LRC
• A given sample belongs to the class with
minimum reconstruction error
Problem:
Does this assumption hold well in real-world applications?
Examples
• The training face images
Examples
[Figure: four bar-chart panels (labeled 20, 3, 17, 14) plotted over the range 0–40.]
Motivation
• LRC uses downsampled images directly for classification, which is not optimal.
• We aim to find a subspace that conforms to the assumption: in this low-dimensional subspace, a sample is best represented by its intra-class samples.
Algorithm
• Objective function:
$\min \sum_i \sum_j \big\| \boldsymbol{\varepsilon}_i^j \big\|^2 = \sum_i \sum_j \big\| \mathbf{y}_i^j - \mathbf{Y}_i \boldsymbol{\beta}_i^j \big\|^2$
where $\mathbf{y}_i^j = \mathbf{a}^T \mathbf{x}_i^j$ and $\mathbf{Y}_i = \mathbf{a}^T \mathbf{X}_i$ denote the projected sample and its projected intra-class samples.
• To avoid degenerate solutions, we constrain $\mathbf{a}^T \mathbf{X} \mathbf{X}^T \mathbf{a} = 1$.
• Then we have the generalized eigenproblem
$\mathbf{X} (\mathbf{I} - \boldsymbol{\beta} - \boldsymbol{\beta}^T + \boldsymbol{\beta} \boldsymbol{\beta}^T) \mathbf{X}^T \mathbf{a} = \lambda \mathbf{X} \mathbf{X}^T \mathbf{a}$
where $\boldsymbol{\beta} = \mathrm{diag}(\mathbf{W}_1, \mathbf{W}_2, \ldots, \mathbf{W}_c)$ is block-diagonal and $\mathbf{W}_i = [\boldsymbol{\beta}_i^1, \boldsymbol{\beta}_i^2, \ldots, \boldsymbol{\beta}_i^{n_i}]$.
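Given the class-wise coefficient blocks $\mathbf{W}_i$, the generalized eigenproblem above is straightforward to set up. A sketch in SciPy, assuming the blocks have already been computed and that $\mathbf{X}$ has been PCA-reduced so that $\mathbf{X}\mathbf{X}^T$ is invertible (see the SSS slide):

```python
import numpy as np
from scipy.linalg import block_diag, eigh

def cre_eigenproblem(X, class_blocks, d):
    """Solve X (I - B - B^T + B B^T) X^T a = lambda X X^T a and keep the
    eigenvectors of the d smallest eigenvalues.

    X : (D, n) data matrix, columns grouped by class.
    class_blocks : list of (n_i, n_i) matrices W_i whose columns are the
        intra-class reconstruction coefficients beta_i^j.
    """
    n = X.shape[1]
    B = block_diag(*class_blocks)              # block-diagonal beta
    L = np.eye(n) - B - B.T + B @ B.T          # equals (I - B)(I - B)^T
    _, vecs = eigh(X @ L @ X.T, X @ X.T)       # ascending eigenvalues
    return vecs[:, :d]                         # columns a_1, ..., a_d
```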
Example
• Reconstruction strategies of CRE, NPE, and SPP
[Figure: illustration of the three reconstruction strategies; panels labeled CRE, NPE, SPP.]
SSS problem
$\mathbf{X} (\mathbf{I} - \boldsymbol{\beta} - \boldsymbol{\beta}^T + \boldsymbol{\beta} \boldsymbol{\beta}^T) \mathbf{X}^T \mathbf{a} = \lambda \mathbf{X} \mathbf{X}^T \mathbf{a}$
$\mathbf{X} \mathbf{X}^T$ is singular in the SSS (small sample size) case. We apply PCA to reduce the dimensionality of the original samples and thus avoid the SSS problem, as sketched below.
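One way to realize that PCA step, sketched with NumPy's SVD. Centering and the rank tolerance are conventional choices, not prescriptions from the slides:

```python
import numpy as np

def pca_reduce(X, tol=1e-10):
    """Project column samples X (D, n) onto a PCA subspace of full
    numerical rank so that the reduced X X^T is nonsingular."""
    Xc = X - X.mean(axis=1, keepdims=True)          # conventional centering
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))                 # numerical rank
    P_pca = U[:, :r]                                # (D, r) PCA basis
    return P_pca.T @ X, P_pca                       # projected samples, basis
```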
Ridge Regression-based Classification
Linear subspace assumption: $\mathbf{X}_i = [\mathbf{x}_i^1, \mathbf{x}_i^2, \ldots, \mathbf{x}_i^{p_i}]$, $\quad \mathbf{y} = \mathbf{X}_i \boldsymbol{\beta}_i$
Least squares: $\boldsymbol{\beta}_i = (\mathbf{X}_i^T \mathbf{X}_i)^{-1} \mathbf{X}_i^T \mathbf{y}$
Class-$i$ reconstruction: $\hat{\mathbf{y}}_i = \mathbf{X}_i (\mathbf{X}_i^T \mathbf{X}_i)^{-1} \mathbf{X}_i^T \mathbf{y}$, $\quad d_i(\mathbf{y}) = \| \mathbf{y} - \hat{\mathbf{y}}_i \|^2$
The sample is assigned to the class with the minimum reconstruction error: $\min_i d_i(\mathbf{y}), \ i = 1, 2, \ldots, c$
However, $\mathbf{X}_i^T \mathbf{X}_i$ may be singular.
• Solution: ridge regression
$\min J(\boldsymbol{\beta}_i) = \| \mathbf{y}_i - \mathbf{X}_i \boldsymbol{\beta}_i \|^2 \ \Longrightarrow \ \min J(\boldsymbol{\beta}_i) = \| \mathbf{y}_i - \mathbf{X}_i \boldsymbol{\beta}_i \|^2 + \lambda \| \boldsymbol{\beta}_i \|^2$
$\boldsymbol{\beta}_i = (\mathbf{X}_i^T \mathbf{X}_i)^{-1} \mathbf{X}_i^T \mathbf{y}_i \ \Longrightarrow \ \boldsymbol{\beta}_i = (\mathbf{X}_i^T \mathbf{X}_i + \lambda \mathbf{I}_l)^{-1} \mathbf{X}_i^T \mathbf{y}_i$
Steps
• Input: column sample matrix $\mathbf{X} = [\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_c]$
• Output: transform matrix $\mathbf{P}_{CRE}$
Step 1: Project the training samples onto a PCA subspace: $\tilde{\mathbf{X}} = \mathbf{P}_{PCA}^T \mathbf{X}$.
Step 2: Construct the global reconstruction coefficient matrix $\boldsymbol{\beta}$ using $\tilde{\mathbf{X}}$.
Step 3: Solve the generalized eigenvectors of $\tilde{\mathbf{X}} (\mathbf{I} - \boldsymbol{\beta} - \boldsymbol{\beta}^T + \boldsymbol{\beta} \boldsymbol{\beta}^T) \tilde{\mathbf{X}}^T \boldsymbol{\varphi} = \lambda \tilde{\mathbf{X}} \tilde{\mathbf{X}}^T \boldsymbol{\varphi}$ and keep those corresponding to the $d$ smallest eigenvalues. An end-to-end sketch follows.
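An end-to-end sketch of the three steps. Step 2 is the part the slide leaves open; here each training sample is regressed on the remaining samples of its own class with a small ridge term (in the spirit of the RRC slide, but still an assumption), and all helper names are illustrative.

```python
import numpy as np
from scipy.linalg import block_diag, eigh

def cre_fit(X, labels, d, lam=1e-2):
    """Compute the CRE transform P_CRE for column samples X (D, n).

    Step 1: PCA projection; Step 2: block-diagonal coefficient matrix B;
    Step 3: generalized eigenvectors of the d smallest eigenvalues.
    Assumes every class has at least two training samples.
    """
    labels = np.asarray(labels)

    # Step 1: PCA subspace of full numerical rank (avoids the SSS problem).
    U, s, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                            full_matrices=False)
    P_pca = U[:, : int(np.sum(s > 1e-10 * s[0]))]
    Xt = P_pca.T @ X                                   # reduced samples

    # Step 2: intra-class coefficients; each sample is regressed on the
    # other samples of its class (ridge-regularized -- an assumption).
    blocks = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        Xc = Xt[:, idx]
        ni = len(idx)
        Wc = np.zeros((ni, ni))
        for j in range(ni):
            others = np.delete(np.arange(ni), j)
            A = Xc[:, others]
            b = np.linalg.solve(A.T @ A + lam * np.eye(ni - 1),
                                A.T @ Xc[:, j])
            Wc[others, j] = b                          # column j of W_c
        blocks.append(Wc)
    B = block_diag(*blocks)

    # Step 3: generalized eigenproblem, keep the d smallest eigenvalues.
    n = Xt.shape[1]
    L = np.eye(n) - B - B.T + B @ B.T
    _, vecs = eigh(Xt @ L @ Xt.T, Xt @ Xt.T)
    return P_pca @ vecs[:, :d]                         # (D, d) transform
```

A test sample x would then be projected as P_CRE.T @ x and classified with NNC, SRC, LRC, or RRC in that subspace, as in the experiments that follow.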
4. Experiments
Experiments on YALE-B
Recognition rates (%) on the YALE-B database; the value in parentheses is the corresponding feature dimension.

Method      5 Train      10 Train     20 Train
PCA+NNC     36.1 (176)   52.7 (362)   68.9 (727)
LDA+NNC     73.4 (37)    87.0 (37)    91.3 (37)
NPE+NNC     65.7 (77)    79.0 (93)    82.7 (152)
SPP+NNC     60.2 (51)    76.5 (72)    84.4 (91)
CRE+NNC     66.3 (43)    58.6 (112)   54.3 (161)

Method      5 Train      10 Train     20 Train
PCA+SRC     72.4 (91)    85.8 (153)   92.6 (192)
LDA+SRC     72.7 (37)    84.6 (35)    91.7 (37)
NPE+SRC     68.8 (51)    81.5 (80)    90.2 (102)
SPP+SRC     69.2 (51)    83.4 (63)    92.0 (82)
CRE+SRC     78.6 (65)    89.5 (79)    93.4 (90)

Method      5 Train      10 Train     20 Train
PCA+LRC     59.8 (101)   82.7 (148)   85.6 (190)
LDA+LRC     65.3 (37)    84.1 (37)    87.4 (37)
NPE+LRC     70.4 (112)   82.7 (205)   85.3 (240)
SPP+LRC     72.5 (51)    86.0 (72)    91.3 (91)
CRE+LRC     80.7 (43)    92.4 (83)    97.2 (161)
LRC         58.0         81.7         90.9
[Figure: recognition rate vs. dimension curves for CRE+LRC, CRE+NNC, and CRE+SRC.]
Comparison of recognition rates using CRE plus NNC/LRC/SRC on the YALE-B database with 10 and 20 training samples per class, respectively.
[Figure: recognition rate vs. dimension curves for CRE+LRC, PCA+LRC, LDA+LRC, NPE+LRC, and SPP+LRC.]
Comparison of recognition rates using the five methods plus LRC on the YALE-B database with 10 and 20 training samples per class, respectively.
[Figure: recognition rate vs. dimension curves for CRE+LRC, SPP+SRC, and LRC.]
Recognition rates of CRE plus LRC, SPP plus SRC, and direct LRC on the YALE-B database with 20 training samples per class.
Experiments on FERET
Recognition rates (%) on the FERET database; the value in parentheses is the corresponding feature dimension.

Method      3 Train      4 Train      5 Train      6 Train
PCA+NNC     29.4 (203)   33.0 (242)   38.6 (253)   42.6 (286)
LDA+NNC     61.9 (33)    65.8 (199)   69.9 (199)   75.3 (199)
NPE+NNC     58.6 (22)    62.3 (51)    66.2 (46)    70.1 (72)
SPP+NNC     36.9 (146)   43.2 (151)   48.6 (176)   50.2 (181)
CRE+NNC     64.2 (53)    69.4 (55)    73.0 (71)    77.6 (82)

Method      3 Train      4 Train      5 Train      6 Train
PCA+SRC     53.8 (122)   62.8 (118)   68.7 (121)   73.4 (134)
LDA+SRC     66.7 (33)    74.6 (26)    80.1 (36)    86.4 (38)
NPE+SRC     64.3 (42)    70.7 (54)    76.4 (60)    82.6 (68)
SPP+SRC     52.9 (151)   63.7 (172)   69.8 (185)   74.6 (198)
CRE+SRC     75.6 (32)    81.4 (37)    86.3 (43)    91.6 (46)

Method      3 Train      4 Train      5 Train      6 Train
PCA+LRC     40.7 (298)   48.6 (312)   52.0 (335)   54.5 (352)
LDA+LRC     65.4 (39)    73.4 (30)    78.6 (51)    84.1 (62)
NPE+LRC     61.3 (40)    68.7 (65)    72.4 (92)    77.4 (77)
SPP+LRC     50.2 (146)   58.7 (151)   64.2 (176)   68.0 (181)
CRE+LRC     85.4 (53)    90.2 (55)    94.1 (71)    97.9 (82)
LRC         42.0         50.6         55.4         61.2
[Figure: recognition rate vs. dimension curves for CRE+LRC, CRE+NNC, and CRE+SRC.]
Comparison of recognition rates using CRE plus NNC/LRC/SRC on the FERET database with 5 and 6 training samples per class, respectively.
[Figure: recognition rate vs. dimension curves for CRE+LRC, PCA+LRC, LDA+LRC, SPP+LRC, and NPE+LRC.]
Comparison of recognition rates using the five methods plus LRC on the FERET database with 6 training samples per class.
[Figure: recognition rate vs. dimension curves for CRE+LRC, SPP+SRC, and LRC.]
Recognition rates of CRE plus LRC, SPP plus SRC, and direct LRC on the FERET database with 6 training samples per class.
Experiments on Cenparmi
Recognition rates (%) on the CENPARMI handwritten numeral database; the value in parentheses is the corresponding feature dimension.

Classifier    PCA         LDA        NPE         SPP         CRE
NNC           87.6 (30)   88.2 (9)   85.8 (19)   86.9 (33)   87.6 (33)
SRC           90.0 (41)   82.6 (9)   89.6 (21)   92.1 (31)   93.6 (31)
RRC           92.1 (32)   84.8 (9)   92.4 (23)   88.1 (33)   95.6 (38)
[Figure: recognition rate vs. dimension curves for CRE+RRC, PCA+RRC, LDA+RRC, NPE+RRC, and SPP+RRC.]
The recognition rate curves of PCA, LDA, NPE, SPP, and CRE plus RRC on the CENPARMI handwritten numeral database.

[Figure: recognition rate vs. dimension curves for CRE+RRC, CRE+SRC, and CRE+NNC.]
The recognition rate curves of CRE plus RRC/SRC/NNC versus the dimension on the CENPARMI handwritten numeral database.
Comparisons
[Figures: bar-chart panels plotted over the range 0–40.]
Thank you!
Presenter: 陈 燚
August 25, 2011