edge
Classification edge for Gaussian kernel classification model
Syntax
e = edge(Mdl,Tbl,ResponseVarName)
e = edge(Mdl,Tbl,Y)
e = edge(Mdl,X,Y)
e = edge(___,'Weights',weights)
Description
e = edge(Mdl,Tbl,ResponseVarName) returns the classification edge for the trained kernel classifier Mdl using the predictor data in table Tbl and the class labels in Tbl.ResponseVarName.
e = edge(___,'Weights',weights) returns the weighted classification edge using the observation weights supplied in weights. Specify the weights after any of the input argument combinations in previous syntaxes.
Note
If the predictor data X or the predictor variables in Tbl contain any missing values, the edge function can return NaN. For more details, see edge can return NaN for predictor data with missing values.
Examples
Estimate Test-Set Edge
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').
load ionosphere
Partition the data set into training and test sets. Specify a 15% holdout sample for the test set.
rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.15);
trainingInds = training(Partition); % Indices for the training set
testInds = test(Partition); % Indices for the test set
Train a binary kernel classification model using the training set.
Mdl = fitckernel(X(trainingInds,:),Y(trainingInds));
Estimate the training-set edge and the test-set edge.
eTrain = edge(Mdl,X(trainingInds,:),Y(trainingInds))
eTrain = 2.1703
eTest = edge(Mdl,X(testInds,:),Y(testInds))
eTest = 1.5643
Feature Selection Using Test-Set Edges
Perform feature selection by comparing test-set edges from multiple models. Based solely on this criterion, the classifier with the highest edge is the best classifier.
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').
load ionosphere
Partition the data set into training and test sets. Specify a 15% holdout sample for the test set.
rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.15);
trainingInds = training(Partition); % Indices for the training set
XTrain = X(trainingInds,:);
YTrain = Y(trainingInds);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds);
Randomly choose half of the predictor variables.
p = size(X,2); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));
Train two binary kernel classification models: one that uses all of the predictors, and one that uses half of the predictors.
Mdl = fitckernel(XTrain,YTrain);
PMdl = fitckernel(XTrain(:,idxPart),YTrain);
Mdl and PMdl are ClassificationKernel models.
Estimate the test-set edge for each classifier.
fullEdge = edge(Mdl,XTest,YTest)
fullEdge = 1.6335
partEdge = edge(PMdl,XTest(:,idxPart),YTest)
partEdge = 2.0205
Based on the test-set edges, the classifier that uses half of the predictors is the better model.
Input Arguments
Mdl — Binary kernel classification model
ClassificationKernel model object
Binary kernel classification model, specified as a ClassificationKernel model object. You can create a ClassificationKernel model object using fitckernel.
Y — Class labels
categorical array | character array | string array | logical vector | numeric vector | cell array of character vectors
Class labels, specified as a categorical, character, or string array; logical or numeric vector; or cell array of character vectors.
- The data type of Y must be the same as the data type of Mdl.ClassNames. (The software treats string arrays as cell arrays of character vectors.)
- The distinct classes in Y must be a subset of Mdl.ClassNames.
- If Y is a character array, then each element must correspond to one row of the array.
- The length of Y must be equal to the number of observations in X or Tbl.
Data Types: categorical | char | string | logical | single | double | cell
Tbl — Sample data
table
Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.
If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.
If you train Mdl using sample data contained in a table, then the input data for edge must also be in a table.
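As an illustration of the table requirement, here is a minimal sketch. The variable names (Tbl, 'Y') are chosen for illustration, and it assumes a release in which fitckernel accepts table input:

```matlab
load ionosphere                 % predictors X (34 columns), labels Y
Tbl = array2table(X);           % put the predictors in a table
Tbl.Y = Y;                      % append the response variable
Mdl = fitckernel(Tbl,'Y');      % Mdl is trained on a table...
e = edge(Mdl,Tbl,'Y')           % ...so edge must also receive a table
```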
ResponseVarName — Response variable name
name of variable in Tbl
Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.
If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.
The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.
Data Types: char | string
weights — Observation weights
ones(size(X,1),1) (default) | numeric vector | name of variable in Tbl
Observation weights, specified as a numeric vector or the name of a variable in Tbl.
- If weights is a numeric vector, then the size of weights must be equal to the number of rows in X or Tbl.
- If weights is the name of a variable in Tbl, you must specify weights as a character vector or string scalar. For example, if the weights are stored as Tbl.W, then specify weights as 'W'. Otherwise, the software treats all columns of Tbl, including Tbl.W, as predictors.
If you supply weights, edge computes the weighted classification edge. The software weights the observations in each row of X or Tbl with the corresponding weights in weights.
edge normalizes weights to sum up to the value of the prior probability in the respective class.
Data Types: single | double | char | string
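As a sketch of this per-class normalization (illustrative only, not the internal implementation; w, y, and prior are assumed variables), the weights within each class are rescaled so that they sum to that class's prior probability:

```matlab
% w     - observation weights (n-by-1 numeric vector)
% y     - class labels (cell array of character vectors)
% prior - prior probabilities per class (e.g., Mdl.Prior)
classes = unique(y);
wNorm = zeros(size(w));
for k = 1:numel(classes)
    inClass = strcmp(y,classes{k});                       % members of class k
    wNorm(inClass) = prior(k)*w(inClass)/sum(w(inClass)); % sums to prior(k)
end
```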
Output Arguments
e
— Classification edge
numeric scalar
Classification edge, returned as a numeric scalar.
More About
Classification Edge
The classification edge is the weighted mean of the classification margins.
One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.
Classification Margin
The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.
The software defines the classification margin for binary classification as
m = yf(x),
where:
- x is an observation. If the true label of x is the positive class, then y is 1, and –1 otherwise.
- f(x) is the positive-class classification score for the observation x.
If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
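The relationship between margins and the edge can be checked with the companion margin function. This sketch assumes a trained model Mdl and test data XTest, YTest (hypothetical variables), with uniform weights and empirical priors, in which case the edge reduces to the simple mean of the margins:

```matlab
m = margin(Mdl,XTest,YTest);   % per-observation classification margins
e = edge(Mdl,XTest,YTest);     % classification edge
% With uniform weights and empirical priors, e matches mean(m).
```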
Classification Score
For kernel classification models, the raw classification score for classifying the observation x, a row vector, into the positive class is defined by
f(x) = T(x)β + b,
where:
- T(·) is a transformation of an observation for feature expansion.
- β is the estimated column vector of coefficients.
- b is the estimated scalar bias.
The raw classification score for classifying x into the negative class is −f(x). The software classifies observations into the class that yields a positive score.
If the kernel classification model consists of logistic regression learners, then the software applies the 'logit' score transformation to the raw classification scores (see ScoreTransform).
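For example, if the model was trained with logistic regression learners (fitckernel(...,'Learner','logistic')), the 'logit' transform maps a raw score to a positive-class posterior probability. The value f below is a hypothetical raw score, used only to illustrate the transformation:

```matlab
f = 0.8;                       % hypothetical raw score f(x)
posScore = 1./(1 + exp(-f));   % 'logit' transformation of the raw score
negScore = 1 - posScore;       % transformed negative-class score
```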
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
Usage notes and limitations:
edge does not support tall table data.
For more information, seeTall Arrays.
Version History
Introduced in R2017b
R2022a: edge returns a different value for a model with a nondefault cost matrix
If you specify a nondefault cost matrix when you train the input model object, the edge function returns a different value compared to previous releases.
The edge function uses the prior probabilities stored in the Prior property to normalize the observation weights of the input data. The way the function uses the Prior property value has not changed. However, the property value stored in the input model object has changed for a model with a nondefault cost matrix, so the function can return a different value.
For details about the property value change, see Cost property stores the user-specified cost matrix.
If you want the software to handle the cost matrix, prior probabilities, and observation weights as in previous releases, adjust the prior probabilities and observation weights for the nondefault cost matrix, as described in Adjust Prior Probabilities and Observation Weights for Misclassification Cost Matrix. Then, when you train a classification model, specify the adjusted prior probabilities and observation weights by using the Prior and Weights name-value arguments, respectively, and use the default cost matrix.
R2022a: edge can return NaN for predictor data with missing values
The edge function no longer omits an observation with a NaN score when computing the weighted mean of the classification margins. Therefore, edge can now return NaN when the predictor data X or the predictor variables in Tbl contain any missing values. In most cases, if the test set observations do not contain missing predictors, the edge function does not return NaN.
This change improves the automatic selection of a classification model when you use fitcauto. Before this change, the software might select a model (expected to best classify new data) with few non-NaN predictors.
If edge in your code returns NaN, you can update your code to avoid this result. Remove or replace the missing values by using rmmissing or fillmissing, respectively.
The following table shows the classification models for which the edge object function might return NaN. For more details, see the Compatibility Considerations for each edge function.
Model Type | Full or Compact Model Object | edge Object Function |
---|---|---|
Discriminant analysis classification model | ClassificationDiscriminant, CompactClassificationDiscriminant | edge |
Ensemble of learners for classification | ClassificationEnsemble, CompactClassificationEnsemble | edge |
Gaussian kernel classification model | ClassificationKernel | edge |
k-nearest neighbor classification model | ClassificationKNN | edge |
Linear classification model | ClassificationLinear | edge |
Neural network classification model | ClassificationNeuralNetwork, CompactClassificationNeuralNetwork | edge |
Support vector machine (SVM) classification model | ClassificationSVM, CompactClassificationSVM | edge |
See Also