
edge

Classification edge for Gaussian kernel classification model

Description


e = edge(Mdl,X,Y) returns the classification edge for the binary Gaussian kernel classification model Mdl using the predictor data in X and the corresponding class labels in Y.

e = edge(Mdl,Tbl,ResponseVarName) returns the classification edge for the trained kernel classifier Mdl using the predictor data in table Tbl and the class labels in Tbl.ResponseVarName.

e = edge(Mdl,Tbl,Y) returns the classification edge for the classifier Mdl using the predictor data in table Tbl and the class labels in vector Y.

e = edge(___,'Weights',weights) returns the weighted classification edge using the observation weights supplied in weights. Specify the weights after any of the input argument combinations in previous syntaxes.
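
For example, here is a minimal sketch of a weighted call, assuming a trained model Mdl and data X and Y such as those in the examples below (the weight values are illustrative only):

w = ones(size(Y)); % Uniform weights
w(1:10) = 2; % Upweight the first ten observations
eWeighted = edge(Mdl,X,Y,'Weights',w);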

Note

If the predictor data X or the predictor variables in Tbl contain any missing values, the edge function can return NaN. For more details, see edge can return NaN for predictor data with missing values.
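
As a rough illustration of this behavior (a sketch, assuming a trained model Mdl and predictor data X with labels Y as in the examples below):

Xmiss = X;
Xmiss(1,:) = NaN; % Introduce missing predictor values
e = edge(Mdl,Xmiss,Y) % Can evaluate to NaN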

Examples


Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere

Partition the data set into training and test sets. Specify a 15% holdout sample for the test set.

rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.15);
trainingInds = training(Partition); % Indices for the training set
testInds = test(Partition); % Indices for the test set

Train a binary kernel classification model using the training set.

Mdl = fitckernel(X(trainingInds,:),Y(trainingInds));

Estimate the training-set edge and the test-set edge.

eTrain = edge(Mdl,X(trainingInds,:),Y(trainingInds))
eTrain = 2.1703
eTest = edge(Mdl,X(testInds,:),Y(testInds))
eTest = 1.5643

Perform feature selection by comparing test-set edges from multiple models. Based solely on this criterion, the classifier with the highest edge is the best classifier.

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere

Partition the data set into training and test sets. Specify a 15% holdout sample for the test set.

rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.15);
trainingInds = training(Partition); % Indices for the training set
XTrain = X(trainingInds,:);
YTrain = Y(trainingInds);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds);

Randomly choose half of the predictor variables.

p = size(X,2); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));

Train two binary kernel classification models: one that uses all of the predictors, and one that uses half of the predictors.

Mdl = fitckernel(XTrain,YTrain);
PMdl = fitckernel(XTrain(:,idxPart),YTrain);

Mdl and PMdl are ClassificationKernel models.

Estimate the test-set edge for each classifier.

fullEdge = edge(Mdl,XTest,YTest)
fullEdge = 1.6335
partEdge = edge(PMdl,XTest(:,idxPart),YTest)
partEdge = 2.0205

Based on the test-set edges, the classifier that uses half of the predictors is the better model.

Input Arguments


Binary kernel classification model, specified as a ClassificationKernel model object. You can create a ClassificationKernel model object using fitckernel.

Predictor data, specified as an n-by-p numeric matrix, where n is the number of observations and p is the number of predictors used to train Mdl.

The length of Y and the number of observations in X must be equal.

Data Types: single | double

Class labels, specified as a categorical, character, or string array; logical or numeric vector; or cell array of character vectors.

  • The data type of Y must be the same as the data type of Mdl.ClassNames. (The software treats string arrays as cell arrays of character vectors.)

  • The distinct classes in Y must be a subset of Mdl.ClassNames.

  • If Y is a character array, then each element must correspond to one row of the array.

  • The length of Y must be equal to the number of observations in X or Tbl.

Data Types: categorical | char | string | logical | single | double | cell

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.

If you train Mdl using sample data contained in a table, then the input data for edge must also be in a table.
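
For illustration, a minimal table-based sketch (the variable names here are assumptions, not part of this page):

Tbl = array2table(X); % Predictors as table variables
Tbl.Y = Y; % Store the response in the table
MdlTbl = fitckernel(Tbl,'Y'); % Train on the table
eTbl = edge(MdlTbl,Tbl,'Y') % Pass the response variable name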

Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.

If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.

The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: char | string

Observation weights, specified as a numeric vector or the name of a variable in Tbl.

  • If weights is a numeric vector, then the size of weights must be equal to the number of rows in X or Tbl.

  • If weights is the name of a variable in Tbl, you must specify weights as a character vector or string scalar. For example, if the weights are stored as Tbl.W, then specify weights as 'W'. Otherwise, the software treats all columns of Tbl, including Tbl.W, as predictors.

If you supply weights, edge computes the weighted classification edge. The software weights the observations in each row of X or Tbl with the corresponding weights in weights.

edge normalizes weights to sum up to the value of the prior probability in the respective class.

Data Types: single | double | char | string
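
Because of this normalization, rescaling all weights by a common positive factor leaves the edge unchanged. A sketch of this property, assuming Mdl, X, and Y as in the examples above:

w = rand(size(Y,1),1); % Arbitrary positive weights
e1 = edge(Mdl,X,Y,'Weights',w);
e2 = edge(Mdl,X,Y,'Weights',10*w);
abs(e1 - e2) < 1e-12 % Same edge after rescaling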

Output Arguments


Classification edge, returned as a numeric scalar.

More About


Classification Edge

The classification edge is the weighted mean of the classification margins.

One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.
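
With the default uniform weights and empirical prior probabilities, the edge reduces to the sample mean of the margins. A sketch of this relationship, assuming Mdl, X, and Y as in the examples above:

m = margin(Mdl,X,Y); % Per-observation classification margins
e = edge(Mdl,X,Y);
abs(e - mean(m)) < 1e-12 % Edge equals the mean margin in this case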

Classification Margin

The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.

The software defines the classification margin for binary classification as

m = 2yf(x).

x is an observation. If the true label of x is the positive class, then y is 1, and –1 otherwise. f(x) is the positive-class classification score for the observation x. The classification margin is commonly defined as m = yf(x).

If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
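
You can compute margins directly from predicted scores by following the definition above. A sketch, assuming Mdl, X, and Y as in the examples, and assuming that the positive class is the second element of Mdl.ClassNames:

[~,s] = predict(Mdl,X); % s(:,k) is the score for class Mdl.ClassNames(k)
isPos = strcmp(Y,Mdl.ClassNames{2}); % Rows whose true class is the second class
m = s(:,2) - s(:,1); % True-class score minus false-class score ...
m(~isPos) = -m(~isPos); % ... with the sign flipped where the first class is true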

Classification Score

For kernel classification models, the raw classification score for classifying the observation x, a row vector, into the positive class is defined by

f(x) = T(x)β + b.

  • T(·) is a transformation of an observation for feature expansion.

  • β is the estimated column vector of coefficients.

  • b is the estimated scalar bias.

The raw classification score for classifying x into the negative class is –f(x). The software classifies observations into the class that yields a positive score.

If the kernel classification model consists of logistic regression learners, then the software applies the 'logit' score transformation to the raw classification scores (see ScoreTransform).
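
In that case, the transformation applied to a raw score f is the logistic function; a one-line sketch:

logit = @(f) 1./(1 + exp(-f)); % The 'logit' score transformation
logit(0) % A raw score of 0 maps to a transformed score of 0.5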

Extended Capabilities

Version History

Introduced in R2017b
