Discriminative Common Images for Face Recognition


Vo Dinh Minh Nhat and Sungyoung Lee
Kyung Hee University, South Korea
{vdmnhat, sylee}@oslab.khu.ac.kr
Abstract. Linear discriminant analysis (LDA) is an important and well-developed area of image recognition, and to date many linear discrimination methods have been put forward. Basically, in LDA the image always needs to be transformed into a 1D vector; however, the two-dimensional PCA (2DPCA) technique has recently been proposed, in which PCA is applied directly to the original images without transforming them into 1D vectors. In this paper, we propose a new LDA-based method that applies the idea of two-dimensional PCA. In addition, our approach proposes a method called Discriminative Common Images, based on a variation of Fisher's LDA, for face recognition. Experimental results show that our method achieves better performance than other traditional LDA methods.
Index Terms – Fisherfaces, Linear discriminant analysis (LDA), Discriminative Common Image, face recognition.
1. Introduction
The Fisherface method [4] combines PCA and the Fisher criterion [9] to extract the information that discriminates between the classes of a sample set. It is the most representative method of LDA. Nevertheless, Martinez et al. demonstrated that when the training data set is small, the Eigenface method outperforms the Fisherface method [7]. Why should the latter be outperformed by the former? This provoked a variety of explanations. Liu et al. thought that it might be because the Fisherface method uses all the principal components, but the components with small eigenvalues correspond to high-frequency components and usually encode noise [11], leading to recognition results that are less than ideal. In [5], Yu et al. propose a direct LDA (DLDA) approach to solve this problem. It first removes the null space of the between-class scatter matrix by eigen-analysis. Then a simultaneous diagonalization procedure is used to seek the optimal discriminant vectors in the subspace of the between-class scatter matrix. However, in this method, removing the null space of the between-class scatter matrix by dimensionality reduction indirectly discards the null space of the within-class scatter matrix, which contains considerable discriminative information. Rui Huang [10] proposed a method in which the null space of the total scatter matrix, which has been proved to be the common null space of both the between-class and within-class scatter matrices and is useless for discrimination, is removed first. Then, in the lower-dimensional projected space, the null space of the resulting within-class scatter matrix is calculated. This lower-dimensional null space, combined with the previous projection, represents a subspace of the whole null space of the within-class scatter matrix, and is really useful for discrimination. The optimal discriminant vectors of LDA are derived from it. In [14], a common vector for each individual class is obtained by removing all the features that are in the direction of the eigenvectors corresponding to the nonzero eigenvalues of the scatter matrix of its own class. The common vectors are then used for recognition. In their case, instead of using a given class's own scatter matrix, they use the within-class scatter matrix of all classes to obtain the common vectors.

(1) This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment).

Corresponding Authors: Vo Dinh Minh Nhat (vo_), and SungYoung Lee (sylee@oslab.khu.ac.kr)
In [15], a new PCA approach called two-dimensional PCA (2DPCA) is developed for image feature extraction. As opposed to conventional PCA, 2DPCA is based on 2D matrices rather than 1D vectors. That is, the image matrix does not need to be transformed into a vector; instead, an image covariance matrix can be constructed directly from the original image matrices. So, in this paper we improve the LDA algorithm based on the idea of two-dimensional PCA.
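To make the idea concrete, the following is a minimal sketch (not the authors' code) of how 2DPCA's image covariance matrix can be built directly from the image matrices, following the definition in [15]: the k x s images are never flattened into 1D vectors.

```python
import numpy as np

def image_covariance(imgs):
    """2DPCA-style image covariance: constructed directly from the
    k x s image matrices, with no flattening into 1D vectors."""
    imgs = np.asarray(imgs, dtype=float)   # N x k x s stack of images
    mean = imgs.mean(axis=0)               # mean image
    D = imgs - mean                        # centered images
    # G = (1/N) * sum_j (A_j - mean)^T (A_j - mean), an s x s matrix
    return np.einsum('nks,nkt->st', D, D) / len(imgs)
```

The resulting s x s matrix is small compared with the (ks) x (ks) covariance a vectorized PCA would need, which is what lets 2D methods avoid the singularity issues discussed below.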
Generally, in this paper we improve the LDA-based algorithm by applying the 2D approach to the Discriminative Common Vectors [14] algorithm. Our new method takes the advantages of both the 2DPCA method [15] for dealing with high-dimensional data to avoid singularity and the LDA-based algorithm [14] for dealing with the small sample size problem. The remainder of this paper is organized as follows: In Section 2, the traditional LDA methods are reviewed. The idea of the proposed method and its algorithm are described in Section 3. In Section 4, experimental results are presented for the Yale face image database to demonstrate the effectiveness of our method. Finally, conclusions are presented in Section 5.
2. Linear Discriminant Analysis
Suppose that we have N sample images \(\{x_1, x_2, \ldots, x_N\}\) taking values in an n-dimensional image space. Let us also consider a linear transformation mapping the original n-dimensional image space into an m-dimensional feature space, where m < n. The new feature vectors \(y_k \in \mathbb{R}^m\) are defined by the following linear transformation:

\[ y_k = W^T x_k \tag{1} \]

where \(k = 1, 2, \ldots, N\) and \(W \in \mathbb{R}^{n \times m}\) is a matrix with orthonormal columns.
Different objective functions will yield different algorithms with different properties. While PCA seeks directions that are efficient for representation, Linear Discriminant Analysis seeks directions that are efficient for discrimination. Assume that each image belongs to one of C classes \(\{C_1, C_2, \ldots, C_C\}\). Let \(N_i\) be the number of samples in class \(C_i\) \((i = 1, 2, \ldots, C)\), \(\mu_i = \frac{1}{N_i} \sum_{x \in C_i} x\) be the mean of the samples in class \(C_i\), and \(\mu = \frac{1}{N} \sum_{i=1}^{N} x_i\) be the mean of all samples. Then the between-class scatter matrix \(S_b\) is defined as

\[ S_b = \sum_{i=1}^{C} N_i (\mu_i - \mu)(\mu_i - \mu)^T = \Phi_b \Phi_b^T \tag{2} \]

and the within-class scatter matrix \(S_w\) is defined as

\[ S_w = \sum_{i=1}^{C} \sum_{x_k \in C_i} (x_k - \mu_i)(x_k - \mu_i)^T = \Phi_w \Phi_w^T \tag{3} \]

In LDA, the projection \(W_{opt}\) is chosen to maximize the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples, i.e.,

\[ W_{opt} = \arg\max_{W} \frac{|W^T S_b W|}{|W^T S_w W|} = [w_1 \, w_2 \, \ldots \, w_m] \tag{4} \]

where \(\{w_i \mid i = 1, 2, \ldots, m\}\) is the set of generalized eigenvectors of \(S_b\) and \(S_w\) corresponding to the m largest generalized eigenvalues \(\{\lambda_i \mid i = 1, 2, \ldots, m\}\), i.e.,

\[ S_b w_i = \lambda_i S_w w_i, \quad i = 1, 2, \ldots, m \tag{5} \]
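Equations (2)-(5) can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation; it assumes \(S_w\) is nonsingular (which, as discussed later, fails in the small sample size case) and solves Eq. (5) as an ordinary eigenproblem of \(S_w^{-1} S_b\).

```python
import numpy as np

def lda_projection(X, labels, m):
    """Classical LDA (Eqs. 2-5): columns of the returned W are the
    generalized eigenvectors of (S_b, S_w) with the m largest eigenvalues."""
    X = np.asarray(X, dtype=float)           # N samples x n features
    labels = np.asarray(labels)
    n = X.shape[1]
    mu = X.mean(axis=0)                      # global mean of all samples
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for c in np.unique(labels):
        Xc = X[labels == c]
        d = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * (d @ d.T)            # between-class scatter, Eq. (2)
        Dc = Xc - Xc.mean(axis=0)
        Sw += Dc.T @ Dc                      # within-class scatter, Eq. (3)
    # Solve S_b w = lambda S_w w (Eq. 5) as eig(S_w^{-1} S_b);
    # assumes S_w is nonsingular (enough samples per dimension).
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][:m]  # m largest eigenvalues
    return vecs[:, order].real               # n x m projection matrix

# Toy usage: two Gaussian classes in 4-D, projected onto one direction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
W = lda_projection(X, y, m=1)
```

With only two classes, \(S_b\) has rank one, so a single discriminant direction captures all the between-class scatter, which is why m = 1 in the toy usage.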
3. Discriminative Common Vectors and Discriminative Common Images
The Discriminative Common Vectors approach can be summarized as follows; details can be found in [14]:
• Step 1: Compute the nonzero eigenvalues and corresponding eigenvectors \(\alpha_i\) of \(S_w\). Set \(Q = [\alpha_1 \ldots \alpha_r]\), where \(r\) is the rank of \(S_w\).
• Step 2: Choose any sample from each class and project it onto the null space of \(S_w\) to obtain the common vectors:

\[ x_{com}^c = x_i^c - Q Q^T x_i^c, \quad x_i^c \in C_c, \; c = 1, \ldots, C \tag{6} \]

• Step 3: Compute the eigenvectors \(w_k\) of \(S_{com}\), defined below, corresponding to the nonzero eigenvalues. There are at most \(C - 1\) eigenvectors that correspond to nonzero eigenvalues. Use these eigenvectors to form the projection matrix \(W = [w_1 \, w_2 \, \ldots \, w_{C-1}]\), which will be used to obtain feature vectors:

\[ S_{com} = \sum_{c=1}^{C} (x_{com}^c - \mu_{com})(x_{com}^c - \mu_{com})^T, \qquad \mu_{com} = \frac{1}{C} \sum_{c=1}^{C} x_{com}^c \tag{7} \]
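The three steps above can be sketched as follows. This is a minimal illustration under the small-sample-size assumption (N − C < n, so that \(S_w\) has a nontrivial null space); the rank threshold `1e-10` is a numerical tolerance chosen here, not part of the algorithm in [14].

```python
import numpy as np

def discriminative_common_vectors(X, labels):
    """DCV sketch: project one sample per class onto the null space of S_w
    (Eq. 6), then diagonalize the scatter of the common vectors (Eq. 7)."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = X.shape[1]
    # Within-class scatter S_w
    Sw = np.zeros((n, n))
    for c in np.unique(labels):
        Dc = X[labels == c] - X[labels == c].mean(axis=0)
        Sw += Dc.T @ Dc
    # Step 1: eigenvectors with nonzero eigenvalues span the range of S_w.
    vals, vecs = np.linalg.eigh(Sw)
    Q = vecs[:, vals > 1e-10 * max(vals.max(), 1e-30)]   # n x r basis
    # Step 2: common vector of each class (Eq. 6); any sample gives the same
    # result, since within-class differences lie in the range of S_w.
    xcom = np.array([X[labels == c][0] - Q @ (Q.T @ X[labels == c][0])
                     for c in np.unique(labels)])
    # Step 3: eigenvectors of S_com with nonzero eigenvalues (Eq. 7);
    # there are at most C - 1 of them.
    mu = xcom.mean(axis=0)
    Scom = (xcom - mu).T @ (xcom - mu)
    vals2, vecs2 = np.linalg.eigh(Scom)
    W = vecs2[:, vals2 > 1e-10 * max(vals2.max(), 1e-30)]
    return xcom, W
```

A useful sanity check on such a sketch is the defining property of DCV: the common vector of a class does not depend on which of its samples is projected in Step 2.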
In the Discriminative Common Images approach, the image matrix does not need to be transformed into a vector beforehand, so a set of N sample images is represented as \(\{X_1, X_2, \ldots, X_N\}\) with \(X_i \in \mathbb{R}^{k \times s}\). Then the between-class scatter matrix \(S_b\) is re-defined as

\[ S_b = \sum_{i=1}^{C} N_i (\mu_{C_i} - \mu_X)(\mu_{C_i} - \mu_X)^T \tag{8} \]

and the within-class scatter matrix \(S_w\) is re-defined as

\[ S_w = \sum_{i=1}^{C} \sum_{X_k \in C_i} (X_k - \mu_{C_i})(X_k - \mu_{C_i})^T \tag{9} \]

where \(\mu_X = \frac{1}{N} \sum_{i=1}^{N} X_i \in \mathbb{R}^{k \times s}\) is the mean image of all samples and \(\mu_{C_i} = \frac{1}{N_i} \sum_{X \in C_i} X\) is the mean of the samples in class \(C_i\). The common vectors in (6) now become common images, defined as

\[ X_{com}^c = X_i^c - Q Q^T X_i^c, \quad X_i^c \in C_c, \; c = 1, \ldots, C \tag{10} \]
Also, \(S_{com}\) in (7) is re-defined as

\[ S_{com} = \sum_{c=1}^{C} (X_{com}^c - \mu_{com})(X_{com}^c - \mu_{com})^T, \qquad \mu_{com} = \frac{1}{C} \sum_{c=1}^{C} X_{com}^c \tag{11} \]
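The 2D variant of the common-vector computation can be sketched as below. This is an illustration, not the authors' code, and it assumes the \(k \times k\) scatter convention of Eqs. (8)-(10), where \(Q Q^T\) multiplies each image from the left; the `1e-10` rank tolerance is an assumption of this sketch.

```python
import numpy as np

def discriminative_common_images(imgs, labels):
    """DCI sketch (Eqs. 9-10): build S_w directly from the k x s image
    matrices, then project one image per class onto its null space."""
    imgs = np.asarray(imgs, dtype=float)      # N x k x s image matrices
    labels = np.asarray(labels)
    k = imgs.shape[1]
    Sw = np.zeros((k, k))
    for c in np.unique(labels):
        D = imgs[labels == c] - imgs[labels == c].mean(axis=0)
        for Dk in D:
            Sw += Dk @ Dk.T                   # Eq. (9), no 1D flattening
    # Eigenvectors with nonzero eigenvalues span the range of S_w.
    vals, vecs = np.linalg.eigh(Sw)
    Q = vecs[:, vals > 1e-10 * max(vals.max(), 1e-30)]
    # Common image of each class, Eq. (10): X - Q Q^T X
    return np.array([(np.eye(k) - Q @ Q.T) @ imgs[labels == c][0]
                     for c in np.unique(labels)])
```

As in the vector case, the common image of a class is independent of which of its samples is projected, provided \(S_w\) has a nontrivial null space.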
4. Experimental results
This section evaluates the performance of our proposed algorithm, Discriminative Common Images (DCI), compared with that of the original Fisherface algorithm, the Direct LDA algorithm, and Discriminative Common Vectors (DCV), using the Yale face database. The database contains 5760 single-light-source images of 10 subjects, each seen under 576 viewing conditions (9 poses x 64 illumination conditions). For every subject in a particular pose, an image with ambient (background) illumination was also captured.
Table 1. The recognition rates (%) on the Yale database

k    Fisherface    Direct LDA    DCV      DCI
2    77.95         79.97         83.01    85.73
3    86.29         86.59         89.11    93.78
4    91.62         92.32         92.97    95.80
5    93.31         93.98         94.42    97.51
6    95.36         95.80         96.87    98.67
Firstly, we tested the recognition rates with different numbers of training samples. k (k = 2, 3, 4, 5, 6) images of each subject are randomly selected from the database for training, and the remaining images of each subject are used for testing. For each value of k, 50 runs are performed with different random partitions between training set and testing set, and Table 1 shows the average recognition rates (%) on the Yale database.
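The evaluation protocol just described can be sketched as follows. The `train_and_test` callable is a placeholder introduced here for illustration (it would wrap training and nearest-neighbor testing of one of the compared methods); it is not part of the paper.

```python
import numpy as np

def average_accuracy(labels, k, runs, train_and_test, seed=0):
    """Protocol sketch: for each run, randomly pick k images per subject
    for training, test on the rest, and average the accuracies.
    `train_and_test(train_idx, test_idx)` is a user-supplied callable
    (a hypothetical helper) returning the accuracy of one run."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(runs):
        train_idx = []
        for c in np.unique(labels):
            idx = np.flatnonzero(labels == c)
            train_idx.extend(rng.choice(idx, size=k, replace=False))
        train_idx = np.array(sorted(train_idx))
        test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
        accs.append(train_and_test(train_idx, test_idx))
    return float(np.mean(accs))
```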
5. Conclusions
A new LDA-based method for face recognition has been proposed in this paper. We propose a method that applies the idea of two-dimensional PCA; in addition, we propose a method called Discriminative Common Images based on a variation of Fisher's LDA for face recognition. By solving the small sample size problem and the high dimensionality, this paper provides a practical algorithm for applying LDA to image recognition applications and shows its efficiency in face recognition. It has the advantages of easy training, efficient testing, and good performance compared to other linear classifiers.
References
1. M. Turk, A. Pentland: Eigenfaces for recognition. Journal of Cognitive Neuroscience, Vol. 3 (1991) 71–86.
2. W. Zhao, R. Chellappa, P. J. Phillips: Subspace Linear Discriminant Analysis for Face Recognition. Technical Report CAR-TR-914, 1999.
3. D. L. Swets, J. J. Weng: Using discriminant eigenfeatures for image retrieval. IEEE Trans. Pattern Anal. Machine Intell., Vol. 18 (1996) 831–836.
4. P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman: Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Machine Intell., Vol. 19 (1997) 711–720.
5. H. Yu, J. Yang: A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recognit., Vol. 34 (2001) 2067–2070.
6. M. Loog, R. P. W. Duin, R. Haeb-Umbach: Multiclass linear dimension reduction by weighted pairwise Fisher criteria. IEEE Trans. Pattern Anal. Machine Intell., Vol. 23 (2001) 762–766.
7. A. M. Martinez, A. C. Kak: PCA versus LDA. IEEE Trans. Pattern Anal. Machine Intell., Vol. 23 (2001) 228–233.
8. D. H. Foley, J. W. Sammon: An optimal set of discriminant vectors. IEEE Trans. Comput., Vol. C-24 (1975) 281–289.
9. R. A. Fisher: The use of multiple measurements in taxonomic problems. Ann. Eugenics, Vol. 7 (1936) 178–188.
10. Rui Huang, Qingshan Liu, Hanqing Lu, Songde Ma: Solving the small sample size problem of LDA. Proceedings of the 16th International Conference on Pattern Recognition, Vol. 3 (2002).
11. C. Liu, H. Wechsler: Robust coding schemes for indexing and retrieval from large face databases. IEEE Trans. Image Processing, Vol. 9 (2000) 132–137.
12. Chengjun Liu, H. Wechsler: A shape- and texture-based enhanced Fisher classifier for face recognition. IEEE Trans. Image Processing, Vol. 10 (2001) 598–608.
13. L. Chen, H. M. Liao, M. Ko, J. Lin, G. Yu: A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognit., Vol. 33 (2000) 1713–1726.
14. H. Cevikalp, M. Neamtu, M. Wilkes, A. Barkana: Discriminative common vectors for face recognition. IEEE Trans. Pattern Anal. Machine Intell., Vol. 27 (2005) 4–13.
15. Jian Yang, D. Zhang, A. F. Frangi, Jing-yu Yang: Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Trans. Pattern Anal. Machine Intell., Vol. 26 (2004) 131–137.
16. "The Yale face database", cvc.yale.edu/projects/yalefaces/yalefaces.html.
