[Figure: input data -> parameter extractor (x) -> feature extractor and classifier (transformation matrix T, class models) -> classification / recognized classes]

Figure 16.3 An integrated feature extraction and classification system.

Feature Extraction and Compression with Classifiers


the higher-dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. SVM has now evolved into an active area of research [18-21]. This chapter will first introduce the major feature extraction methods, LDA and PCA. The MCE algorithm for integrated feature extraction and classification and the nonlinear formulation of SVM are then introduced. Feature extraction and compression with MCE and SVM are discussed subsequently. The performances of these feature extraction and classification algorithms are compared and discussed based on experimental results on the Deterding vowel and TIMIT continuous speech databases.

2. Standard Feature Extraction Methods

2.1 Linear Discriminant Analysis


The goal of linear discriminant analysis is to separate the classes by projecting class samples from a p-dimensional space onto a finely orientated line. For a K-class problem, m = min(K - 1, p) different lines will be involved. Thus, the projection is from a p-dimensional space to an m-dimensional space [22]. Suppose we have K classes, X_1, X_2, \ldots, X_K. Let the ith observation vector from class X_j be x_{ji}, where j = 1, \ldots, K and i = 1, \ldots, N_j, with N_j the number of observations from class j. The within-class covariance matrix S_w and between-class covariance matrix S_b are defined as:


S_w = \sum_{j=1}^{K} S_j = \sum_{j=1}^{K} \frac{1}{N_j} \sum_{i=1}^{N_j} (x_{ji} - \mu_j)(x_{ji} - \mu_j)^T    (16.1)

S_b = \frac{1}{N} \sum_{j=1}^{K} N_j (\mu_j - \mu)(\mu_j - \mu)^T

where \mu_j = \frac{1}{N_j} \sum_{i=1}^{N_j} x_{ji} is the mean of class j and \mu = \frac{1}{N} \sum_{i=1}^{N} x_i is the global mean.

The projection from observation space to feature space is accomplished by a linear transformation matrix T:

y = T^T x    (16.2)

The corresponding within-class and between-class covariance matrices in the feature space are:

\tilde{S}_w = \sum_{j=1}^{K} \frac{1}{N_j} \sum_{i=1}^{N_j} (y_{ji} - \tilde{\mu}_j)(y_{ji} - \tilde{\mu}_j)^T, \qquad \tilde{S}_b = \frac{1}{N} \sum_{j=1}^{K} N_j (\tilde{\mu}_j - \tilde{\mu})(\tilde{\mu}_j - \tilde{\mu})^T    (16.3)

where \tilde{\mu}_j = \frac{1}{N_j} \sum_{i=1}^{N_j} y_{ji} and \tilde{\mu} = \frac{1}{N} \sum_{i=1}^{N} y_i. It is straightforward to show that:

\tilde{S}_w = T^T S_w T, \qquad \tilde{S}_b = T^T S_b T    (16.4)

A linear discriminant is then defined as the linear function for which the objective function

J(T) = \frac{|\tilde{S}_b|}{|\tilde{S}_w|} = \frac{|T^T S_b T|}{|T^T S_w T|}    (16.5)

is maximal. It can be shown that the solution of Equation (16.5) is that the ith column of an optimal T is the generalized eigenvector corresponding to the ith largest eigenvalue of the matrix S_w^{-1} S_b [6].
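As a brief sketch (not the chapter's own code), the LDA computation above can be implemented with NumPy: the scatter matrices follow Equation (16.1) and the definition of S_b, the columns of T are taken as the leading eigenvectors of S_w^{-1} S_b, and the identity of Equation (16.4) is checked numerically. The three-class Gaussian data, dimensions, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: K = 3 classes in p = 4 dimensions, N_j = 50 samples each.
K, p, Nj = 3, 4, 50
class_means = np.array([[0, 0, 0, 0], [3, 1, 0, 0], [0, 3, 1, 0]], float)
X = [rng.normal(mean_j, 1.0, size=(Nj, p)) for mean_j in class_means]

N = K * Nj
mu = np.concatenate(X).mean(axis=0)                   # global mean
Sw = np.zeros((p, p))
Sb = np.zeros((p, p))
for Xj in X:
    mu_j = Xj.mean(axis=0)
    D = Xj - mu_j
    Sw += (D.T @ D) / len(Xj)                         # within-class scatter, Eq. (16.1)
    Sb += len(Xj) * np.outer(mu_j - mu, mu_j - mu) / N  # between-class scatter

# Columns of an optimal T: leading eigenvectors of Sw^{-1} Sb;
# at most min(K - 1, p) eigenvalues are nonzero.
evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(evals.real)[::-1]
m = min(K - 1, p)
T = evecs.real[:, order[:m]]                          # p x m transformation matrix

y = np.concatenate(X) @ T                             # y = T^T x applied row-wise, Eq. (16.2)

# Numerical check of Eq. (16.4): within-class scatter of the projected data
# equals T^T Sw T.
Sw_tilde = np.zeros((m, m))
for Xj in X:
    Yj = Xj @ T
    Dm = Yj - Yj.mean(axis=0)
    Sw_tilde += (Dm.T @ Dm) / len(Yj)
print(np.allclose(Sw_tilde, T.T @ Sw @ T))            # True
```

The generalized eigenproblem is solved here by explicitly inverting S_w for simplicity; a more numerically careful implementation would use a generalized symmetric eigensolver instead.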


2.2 Principal Component Analysis

PCA is a well-established technique for feature extraction and dimensionality reduction [2,23]. It is based on the assumption that most information about classes is contained in the directions along which the variations are the largest. The most common derivation of PCA is in terms of a standardized linear projection which maximizes the variance in the projected space [1]. For a given p-dimensional data set X, the m principal axes T_1, T_2, \ldots, T_m, where 1 \le m \le p, are orthonormal axes onto which the retained variance is maximum in the projected space. Generally, T_1, T_2, \ldots, T_m can be given by the m leading eigenvectors of the sample covariance matrix S = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T, where x_i \in X, \mu is the sample mean and N is the number of samples, so that:

S T_i = \lambda_i T_i, \quad i \in \{1, \ldots, m\}    (16.6)

where \lambda_i is the ith largest eigenvalue of S. The m principal components of a given observation vector x \in X are given by:

y = [y_1, \ldots, y_m]^T = [T_1^T x, \ldots, T_m^T x]^T = T^T x    (16.7)

The m principal components of x are decorrelated in the projected space [2]. In multiclass problems, the variations of data are determined on a global basis, that is, the principal axes are derived from a global covariance matrix:

S = \frac{1}{N} \sum_{j=1}^{K} \sum_{i=1}^{N_j} (x_{ji} - \mu)(x_{ji} - \mu)^T    (16.8)

where \mu is the global mean of all the samples, K is the number of classes, N_j is the number of samples in class j, N = \sum_{j=1}^{K} N_j and x_{ji} represents the ith observation from class j. The principal axes T_1, T_2, \ldots, T_m are therefore the m leading eigenvectors of S:

S T_i = \lambda_i T_i, \quad i \in \{1, \ldots, m\}    (16.9)

where \lambda_i is the ith largest eigenvalue of S. An assumption made for feature extraction and dimensionality reduction by PCA is that most of the information in the observation vectors is contained in the subspace spanned by the first m principal axes, where m < p. Therefore, each original data vector can be represented by its principal component vector with dimensionality m.
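As a sketch of the PCA steps above (with assumed data sizes, not the chapter's experimental setup), the principal axes of Equations (16.6) and (16.9) can be obtained from the eigendecomposition of the sample covariance matrix, and the decorrelation property of the projected components can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: N = 200 samples in p = 5 correlated dimensions.
N, p, m = 200, 5, 2
X = rng.normal(size=(N, p)) @ rng.normal(size=(p, p))

mu = X.mean(axis=0)
D = X - mu
S = (D.T @ D) / N                      # sample covariance matrix

evals, evecs = np.linalg.eigh(S)       # S is symmetric; eigh gives ascending eigenvalues
order = np.argsort(evals)[::-1]        # reorder so lambda_1 >= lambda_2 >= ...
T = evecs[:, order[:m]]                # m leading eigenvectors = principal axes, Eq. (16.6)

Y = D @ T                              # principal components of the centred data, Eq. (16.7)

# Decorrelation check: the covariance of the projected components is diagonal,
# with the m largest eigenvalues of S on the diagonal.
C = (Y.T @ Y) / N
print(np.allclose(C, np.diag(evals[order[:m]])))   # True
```

Note that Equation (16.7) projects the raw vector x; the data are mean-centred here only so that the covariance of the projected components can be compared against the eigenvalues directly, which does not change the axes T_i.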