fitrkernel
Fit Gaussian kernel regression model using random feature expansion

fitrkernel trains or cross-validates a Gaussian kernel regression model for nonlinear regression. fitrkernel is more practical for big data applications that have large training sets, but can also be applied to smaller data sets that fit in memory.

fitrkernel maps data in a low-dimensional space into a high-dimensional space, then fits a linear model in the high-dimensional space by minimizing the regularized objective function. Obtaining the linear model in the high-dimensional space is equivalent to applying the Gaussian kernel to the model in the low-dimensional space. Available linear regression models include regularized support vector machine (SVM) and least-squares regression models.

To train a nonlinear SVM regression model on in-memory data, see fitrsvm.
Mdl = fitrkernel(Tbl,ResponseVarName) returns a kernel regression model Mdl trained using the predictor variables contained in the table Tbl and the response values in Tbl.ResponseVarName.

Mdl = fitrkernel(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can implement least-squares regression, specify the number of dimensions of the expanded space, or specify cross-validation options.

[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(___) also returns the hyperparameter optimization results when you optimize hyperparameters by using the 'OptimizeHyperparameters' name-value pair argument.
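For instance, a minimal sketch of the first syntax (assuming a table Tbl whose numeric response variable is named MPG; both names are illustrative):

Mdl = fitrkernel(Tbl,'MPG'); % all other variables in Tbl serve as predictors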
Train a kernel regression model for a tall array by using SVM.

When you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. To run the example using the local MATLAB session when you have Parallel Computing Toolbox, change the global execution environment by using the mapreducer function.

mapreducer(0)

Create a datastore that references the folder location with the data. The data can be contained in a single file, a collection of files, or an entire folder. Treat 'NA' values as missing data so that datastore replaces them with NaN values. Select a subset of the variables to use. Create a tall table on top of the datastore.

varnames = {'ArrTime','DepTime','ActualElapsedTime'};
ds = datastore('airlinesmall.csv','TreatAsMissing','NA',...
    'SelectedVariableNames',varnames);
t = tall(ds);
Specify DepTime and ArrTime as the predictor variables (X) and ActualElapsedTime as the response variable (Y). Select the observations for which ArrTime is later than DepTime.

daytime = t.ArrTime>t.DepTime;
Y = t.ActualElapsedTime(daytime);     % Response data
X = t{daytime,{'DepTime','ArrTime'}}; % Predictor data

Standardize the predictor variables.

Z = zscore(X); % Standardize the data
Train a default Gaussian kernel regression model with the standardized predictors. Extract a fit summary to determine how well the optimization algorithm fits the model to the data.
[Mdl,FitInfo] = fitrkernel(Z,Y)
Found 6 chunks.
|=========================================================================|
| Solver | Iteration / | Objective    | Gradient     | Beta relative |
|        | Data Pass   |              | magnitude    | change        |
|=========================================================================|
|   INIT |    0 /  1   | 4.307833e+01 | 4.345788e-02 |          NaN  |
|  LBFGS |    0 /  2   | 3.705713e+01 | 1.577301e-02 | 9.988252e-01  |
|  LBFGS |    1 /  3   | 3.704022e+01 | 3.082836e-02 | 1.338410e-03  |
|  LBFGS |    2 /  4   | 3.701398e+01 | 3.006488e-02 | 1.116070e-03  |
|  LBFGS |    2 /  5   | 3.698797e+01 | 2.870642e-02 | 2.234599e-03  |
|  LBFGS |    2 /  6   | 3.693687e+01 | 2.625581e-02 | 4.479069e-03  |
|  LBFGS |    2 /  7   | 3.683757e+01 | 2.239620e-02 | 8.997877e-03  |
|  LBFGS |    2 /  8   | 3.665038e+01 | 1.782358e-02 | 1.815682e-02  |
|  LBFGS |    3 /  9   | 3.473411e+01 | 4.074480e-02 | 1.778166e-01  |
|  LBFGS |    4 / 10   | 3.684246e+01 | 1.608942e-01 | 3.294968e-01  |
|  LBFGS |    4 / 11   | 3.441595e+01 | 8.587703e-02 | 1.420892e-01  |
|  LBFGS |    5 / 12   | 3.377755e+01 | 3.760006e-02 | 4.640134e-02  |
|  LBFGS |    6 / 13   | 3.357732e+01 | 1.912644e-02 | 3.842057e-02  |
|  LBFGS |    7 / 14   | 3.334081e+01 | 3.046709e-02 | 6.211243e-02  |
|  LBFGS |    8 / 15   | 3.309239e+01 | 3.858085e-02 | 6.411356e-02  |
|  LBFGS |    9 / 16   | 3.276577e+01 | 3.612292e-02 | 6.938579e-02  |
|  LBFGS |   10 / 17   | 3.234029e+01 | 2.734959e-02 | 1.144307e-01  |
|  LBFGS |   11 / 18   | 3.205763e+01 | 2.545990e-02 | 7.323180e-02  |
|  LBFGS |   12 / 19   | 3.183341e+01 | 2.472411e-02 | 3.689625e-02  |
|  LBFGS |   13 / 20   | 3.169307e+01 | 2.064613e-02 | 2.998555e-02  |
|=========================================================================|
| Solver | Iteration / | Objective    | Gradient     | Beta relative |
|        | Data Pass   |              | magnitude    | change        |
|=========================================================================|
|  LBFGS |   14 / 21   | 3.146896e+01 | 1.788395e-02 | 5.967293e-02  |
|  LBFGS |   15 / 22   | 3.118171e+01 | 1.660696e-02 | 1.124062e-01  |
|  LBFGS |   16 / 23   | 3.106224e+01 | 1.506147e-02 | 7.947037e-02  |
|  LBFGS |   17 / 24   | 3.098395e+01 | 1.564561e-02 | 2.678370e-02  |
|  LBFGS |   18 / 25   | 3.096029e+01 | 4.464104e-02 | 4.547148e-02  |
|  LBFGS |   19 / 26   | 3.085475e+01 | 1.442800e-02 | 1.677268e-02  |
|  LBFGS |   20 / 27   | 3.078140e+01 | 1.906548e-02 | 2.275185e-02  |
|========================================================================|
Mdl = 
  RegressionKernel
            PredictorNames: {'x1'  'x2'}
              ResponseName: 'Y'
                   Learner: 'svm'
    NumExpansionDimensions: 64
               KernelScale: 1
                    Lambda: 8.5385e-06
             BoxConstraint: 1
                   Epsilon: 5.9303

  Properties, Methods

FitInfo = struct with fields:
                  Solver: 'LBFGS-tall'
            LossFunction: 'epsiloninsensitive'
                  Lambda: 8.5385e-06
           BetaTolerance: 1.0000e-03
       GradientTolerance: 1.0000e-05
          ObjectiveValue: 30.7814
       GradientMagnitude: 0.0191
    RelativeChangeInBeta: 0.0228
                 FitTime: 50.0477
                 History: [1x1 struct]
Mdl is a RegressionKernel model. To inspect the regression error, you can pass Mdl and the training data or new data to the loss function. Or, you can pass Mdl and new predictor data to the predict function to predict responses for new observations. You can also pass Mdl and the training data to the resume function to continue training.
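For instance, a brief sketch, assuming Z and Y from this example are still in scope (because the data is tall, wrap results in gather to evaluate them):

mse = loss(Mdl,Z,Y);          % in-sample regression loss
yhat = predict(Mdl,Z);        % predicted responses for the training predictors
UpdatedMdl = resume(Mdl,Z,Y); % continue training from the current solution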
FitInfo is a structure array containing optimization information. Use FitInfo to determine whether optimization termination measurements are satisfactory.
For improved accuracy, you can increase the maximum number of optimization iterations ('IterationLimit') and decrease the tolerance values ('BetaTolerance' and 'GradientTolerance') by using the name-value pair arguments of fitrkernel. Doing so can improve measures like ObjectiveValue and RelativeChangeInBeta in FitInfo. You can also optimize model parameters by using the 'OptimizeHyperparameters' name-value pair argument.
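For example, a sketch that tightens the stopping criteria (the specific values are illustrative, not recommendations):

[Mdl2,FitInfo2] = fitrkernel(Z,Y,'IterationLimit',500,...
    'BetaTolerance',1e-4,'GradientTolerance',1e-6);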
Load the carbig data set.

load carbig

Specify the predictor variables (X) and the response variable (Y).

X = [Acceleration,Cylinders,Displacement,Horsepower,Weight];
Y = MPG;

Delete rows of X and Y where either array has NaN values. Removing rows with NaN values before passing data to fitrkernel can speed up training and reduce memory usage.

R = rmmissing([X Y]); % Data with missing entries removed
X = R(:,1:5);
Y = R(:,end);

Standardize the predictor variables.

Z = zscore(X);

Cross-validate a kernel regression model using 5-fold cross-validation.

Mdl = fitrkernel(Z,Y,'KFold',5)
Mdl = 
  RegressionPartitionedKernel
    CrossValidatedModel: 'Kernel'
           ResponseName: 'Y'
        NumObservations: 392
                  KFold: 5
              Partition: [1x1 cvpartition]
      ResponseTransform: 'none'

  Properties, Methods
numel(Mdl.Trained)
ans = 5
Mdl is a RegressionPartitionedKernel model. Because fitrkernel implements five-fold cross-validation, Mdl contains five RegressionKernel models that the software trains on training-fold (in-fold) observations.
Examine the cross-validation loss (mean squared error) for each fold.
kfoldLoss(Mdl,'mode','individual')
ans = 5×1

   13.0610
   14.0975
   24.0104
   21.1223
   24.3979
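To get the average loss over the folds instead, call kfoldLoss with its default mode:

avgMSE = kfoldLoss(Mdl) % mean squared error averaged over the five folds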
Optimize hyperparameters automatically using the 'OptimizeHyperparameters' name-value pair argument.

Load the carbig data set.

load carbig

Specify the predictor variables (X) and the response variable (Y).

X = [Acceleration,Cylinders,Displacement,Horsepower,Weight];
Y = MPG;

Delete rows of X and Y where either array has NaN values. Removing rows with NaN values before passing data to fitrkernel can speed up training and reduce memory usage.

R = rmmissing([X Y]); % Data with missing entries removed
X = R(:,1:5);
Y = R(:,end);

Standardize the predictor variables.

Z = zscore(X);

Find hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization. Specify 'OptimizeHyperparameters' as 'auto' so that fitrkernel finds the optimal values of the 'KernelScale', 'Lambda', and 'Epsilon' name-value pair arguments. For reproducibility, set the random seed and use the 'expected-improvement-plus' acquisition function.

rng('default')
[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(Z,Y,'OptimizeHyperparameters','auto',...
    'HyperparameterOptimizationOptions',struct('AcquisitionFunctionName','expected-improvement-plus'))
|====================================================================================================================|
| Iter | Eval   | Objective:  | Objective | BestSoFar  | BestSoFar | KernelScale |   Lambda   |  Epsilon |
|      | result | log(1+loss) | runtime   | (observed) | (estim.)  |             |            |          |
|====================================================================================================================|
|    1 | Best   |      4.8295 |    1.0202 |     4.8295 |    4.8295 |    0.011518 | 6.8068e-05 |  0.95918 |
|    2 | Best   |      4.1488 |   0.20075 |     4.1488 |    4.1855 |      477.57 |   0.066115 | 0.091828 |
|    3 | Accept |      4.1521 |   0.23448 |     4.1488 |    4.1747 |   0.0080478 |  0.0052867 |   520.84 |
|    4 | Accept |      4.1506 |   0.19343 |     4.1488 |    4.1488 |     0.10935 |    0.35931 | 0.013372 |
|    5 | Best   |      4.1446 |   0.22183 |     4.1446 |    4.1446 |      326.29 |     2.5457 |  0.22475 |
|    6 | Accept |      4.1521 |   0.20642 |     4.1446 |    4.1447 |      932.16 |    0.19667 |   873.68 |
|    7 | Accept |      4.1501 |   0.44117 |     4.1446 |    4.1461 |    0.052426 |     2.5402 | 0.051319 |
|    8 | Best   |      4.1408 |    0.2327 |     4.1408 |    4.1402 |      850.91 |    0.01462 |  0.37284 |
|    9 | Accept |      4.1521 |   0.35173 |     4.1408 |    4.1427 |    0.019352 |   0.012035 |   63.493 |
|   10 | Accept |      4.1521 |   0.21123 |     4.1408 |    4.1452 |      853.22 |     1.0698 |   55.679 |
|   11 | Accept |      4.1521 |   0.34269 |     4.1408 |    4.1416 |      1.4548 |   0.022234 |   26.275 |
|   12 | Accept |      4.1509 |   0.19879 |     4.1408 |    4.1469 |      877.82 |  0.0071133 | 0.012021 |
|   13 | Accept |      4.1422 |   0.85467 |     4.1408 |    4.1455 |      944.08 |   0.011177 |  0.31055 |
|   14 | Accept |      4.2032 |   0.29126 |     4.1408 |    4.1405 |      979.21 |   0.010842 |   13.776 |
|   15 | Accept |      4.1438 |   0.20706 |     4.1408 |    4.1509 |    0.001234 |   0.018449 | 0.044225 |
|   16 | Best   |      4.1372 |   0.17378 |     4.1372 |    4.1511 |      1.7802 |     2.5477 | 0.014737 |
|   17 | Accept |      4.1521 |   0.24019 |     4.1372 |    4.1466 |   0.0015946 |     2.5474 |   590.35 |
|   18 | Accept |      4.1452 |   0.20963 |     4.1372 |    4.1464 |    0.058846 |     1.0766 |  0.20569 |
|   19 | Accept |      4.1521 |   0.32159 |     4.1372 |    4.1461 |       2.187 | 2.5531e-06 |   278.92 |
|   20 | Accept |      4.1451 |   0.32948 |     4.1372 |    4.1461 |   0.0050283 |   0.039894 |  0.14402 |
|====================================================================================================================|
| Iter | Eval   | Objective:  | Objective | BestSoFar  | BestSoFar | KernelScale |   Lambda   |  Epsilon |
|      | result | log(1+loss) | runtime   | (observed) | (estim.)  |             |            |          |
|====================================================================================================================|
|   21 | Best   |      4.1362 |   0.26969 |     4.1362 |    4.1426 |   0.0029885 |   0.039099 |   6.3938 |
|   22 | Accept |      4.1521 |   0.20719 |     4.1362 |    4.1449 |    0.035949 |   0.038533 |   80.585 |
|   23 | Accept |      4.1399 |   0.36116 |     4.1362 |    4.1446 |      50.001 |   0.095432 |  0.19954 |
|   24 | Accept |      4.1487 |   0.52381 |     4.1362 |    4.1374 |    0.012199 |   0.089894 | 0.034773 |
|   25 | Accept |      4.1521 |   0.22703 |     4.1362 |    4.1447 |   0.0011871 |    0.30153 |   425.89 |
|   26 | Accept |      4.1466 |   0.45165 |     4.1362 |     4.145 |   0.0011773 |   0.052213 | 0.017592 |
|   27 | Accept |      4.1418 |   0.17754 |     4.1362 |     4.145 |       7.556 |      1.655 | 0.016225 |
|   28 | Accept |      4.1407 |    0.4172 |     4.1362 |     4.145 |     0.01201 |     1.6696 |  0.38806 |
|   29 | Accept |      5.4153 |    3.9701 |     4.1362 |    4.1365 |   0.0010531 | 1.1032e-05 | 0.034083 |
|   30 | Accept |      4.1521 |   0.34684 |     4.1362 |    4.1364 |      652.19 | 2.6286e-06 |   882.02 |
__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 30 reached.
Total function evaluations: 30
Total elapsed time: 26.6629 seconds
Total objective function evaluation time: 13.4353

Best observed feasible point:
    KernelScale     Lambda     Epsilon
    ___________    ________    _______
     0.0029885     0.039099    6.3938

Observed objective function value = 4.1362
Estimated objective function value = 4.1364
Function evaluation time = 0.26969

Best estimated feasible point (according to models):
    KernelScale     Lambda     Epsilon
    ___________    ________    _______
     0.0029885     0.039099    6.3938

Estimated objective function value = 4.1364
Estimated function evaluation time = 0.31488
Mdl = 
  RegressionKernel
              ResponseName: 'Y'
                   Learner: 'svm'
    NumExpansionDimensions: 256
               KernelScale: 0.0030
                    Lambda: 0.0391
             BoxConstraint: 0.0652
                   Epsilon: 6.3938

  Properties, Methods

FitInfo = struct with fields:
                  Solver: 'LBFGS-fast'
            LossFunction: 'epsiloninsensitive'
                  Lambda: 0.0391
           BetaTolerance: 1.0000e-04
       GradientTolerance: 1.0000e-06
          ObjectiveValue: 1.7716
       GradientMagnitude: 0.0051
    RelativeChangeInBeta: 8.5572e-05
                 FitTime: 0.0237
                 History: []

HyperparameterOptimizationResults = 
  BayesianOptimization with properties:

                      ObjectiveFcn: @createObjFcn/inMemoryObjFcn
              VariableDescriptions: [5x1 optimizableVariable]
                           Options: [1x1 struct]
                      MinObjective: 4.1362
                   XAtMinObjective: [1x3 table]
             MinEstimatedObjective: 4.1364
          XAtMinEstimatedObjective: [1x3 table]
           NumObjectiveEvaluations: 30
                  TotalElapsedTime: 26.6629
                         NextPoint: [1x3 table]
                            XTrace: [30x3 table]
                    ObjectiveTrace: [30x1 double]
                  ConstraintsTrace: []
                     UserDataTrace: {30x1 cell}
      ObjectiveEvaluationTimeTrace: [30x1 double]
                IterationTimeTrace: [30x1 double]
                        ErrorTrace: [30x1 double]
                  FeasibilityTrace: [30x1 logical]
       FeasibilityProbabilityTrace: [30x1 double]
             IndexOfMinimumTrace: [30x1 double]
            ObjectiveMinimumTrace: [30x1 double]
    EstimatedObjectiveMinimumTrace: [30x1 double]
For big data, the optimization procedure can take a long time. If the data set is too large to run the optimization procedure, you can try to optimize the parameters using only partial data. Use the datasample function and specify 'Replace',false to sample data without replacement.
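For instance, a sketch of optimizing on a random half of the in-memory data (the 50% fraction is an arbitrary illustration):

n = numel(Y);
[Zs,idx] = datasample(Z,round(n/2),'Replace',false); % sample rows without replacement
Ys = Y(idx);
MdlPart = fitrkernel(Zs,Ys,'OptimizeHyperparameters','auto');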
X — Predictor data
numeric matrix

Predictor data to which the regression model is fit, specified as an n-by-p numeric matrix, where n is the number of observations and p is the number of predictor variables.

The length of Y and the number of observations in X must be equal.

Data Types: single | double
Tbl — Sample data
table

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If Tbl contains the response variable, and you want to use all remaining variables in Tbl as predictors, then specify the response variable by using ResponseVarName.

If Tbl contains the response variable, and you want to use only a subset of the remaining variables in Tbl as predictors, then specify a formula by using formula.

If Tbl does not contain the response variable, then specify a response variable by using Y. The length of the response variable and the number of rows in Tbl must be equal.

Data Types: table
ResponseVarName — Response variable name
name of variable in Tbl

Response variable name, specified as the name of a variable in Tbl. The response variable must be a numeric vector.

You must specify ResponseVarName as a character vector or string scalar. For example, if Tbl stores the response variable Y as Tbl.Y, then specify it as 'Y'. Otherwise, the software treats all columns of Tbl, including Y, as predictors when training the model.

Data Types: char | string
formula — Explanatory model of response variable and subset of predictor variables
character vector | string scalar

Explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form "Y~x1+x2+x3". In this form, Y represents the response variable, and x1, x2, and x3 represent the predictor variables.

To specify a subset of variables in Tbl as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in Tbl that do not appear in formula.

The variable names in the formula must be both variable names in Tbl (Tbl.Properties.VariableNames) and valid MATLAB® identifiers. You can verify the variable names in Tbl by using the isvarname function. If the variable names are not valid, then you can convert them by using the matlab.lang.makeValidName function (see the sketch after this argument description).

Data Types: char | string
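A brief sketch of that validation step (assuming a table Tbl is already in the workspace):

% Convert any invalid table variable names into valid MATLAB identifiers
names = Tbl.Properties.VariableNames;
if ~all(cellfun(@isvarname,names))
    Tbl.Properties.VariableNames = matlab.lang.makeValidName(names);
end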
Note

The software treats NaN, empty character vector (''), empty string (""), <missing>, and <undefined> elements as missing values, and removes observations with any of these characteristics:

Missing value in the response variable

At least one missing value in a predictor observation (row in X or Tbl)

NaN value or 0 weight ('Weights')
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: Mdl = fitrkernel(X,Y,'Learner','leastsquares','NumExpansionDimensions',2^15,'KernelScale','auto') implements least-squares regression after mapping the predictor data to the 2^15-dimensional space using feature expansion with a kernel scale parameter selected by a heuristic procedure.
Note

You cannot use any cross-validation name-value argument together with the 'OptimizeHyperparameters' name-value argument. You can modify the cross-validation for 'OptimizeHyperparameters' only by using the 'HyperparameterOptimizationOptions' name-value argument.
BoxConstraint — Box constraint
1 (default) | positive scalar

Box constraint, specified as the comma-separated pair consisting of 'BoxConstraint' and a positive scalar.

This argument is valid only when 'Learner' is 'svm' (default) and you do not specify a value for the regularization term strength 'Lambda'. You can specify either 'BoxConstraint' or 'Lambda' because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations (rows in X).

Example: 'BoxConstraint',100

Data Types: single | double
Epsilon — Half width of epsilon-insensitive band
'auto' (default) | nonnegative scalar value

Half the width of the epsilon-insensitive band, specified as the comma-separated pair consisting of 'Epsilon' and 'auto' or a nonnegative scalar value.

For 'auto', the fitrkernel function determines the value of Epsilon as iqr(Y)/13.49, which is an estimate of a tenth of the standard deviation using the interquartile range of the response variable Y. If iqr(Y) is equal to zero, then fitrkernel sets the value of Epsilon to 0.1.
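A minimal sketch of this heuristic (the documented formula, not the internal implementation):

e = iqr(Y)/13.49; % roughly sigma/10 for a normally distributed response
if e == 0
    e = 0.1;      % fallback when the interquartile range is zero
end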
'Epsilon' is valid only when Learner is svm.

Example: 'Epsilon',0.3

Data Types: single | double
NumExpansionDimensions — Number of dimensions of expanded space
'auto' (default) | positive integer

Number of dimensions of the expanded space, specified as the comma-separated pair consisting of 'NumExpansionDimensions' and 'auto' or a positive integer. For 'auto', the fitrkernel function selects the number of dimensions using 2.^ceil(min(log2(p)+5,15)), where p is the number of predictors.
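For instance, with p = 5 predictors the formula gives:

p = 5;
m = 2.^ceil(min(log2(p)+5,15)) % returns 256, matching the optimized model above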
Example: 'NumExpansionDimensions',2^15

Data Types: char | string | single | double
KernelScale — Kernel scale parameter
1 (default) | 'auto' | positive scalar

Kernel scale parameter, specified as the comma-separated pair consisting of 'KernelScale' and 'auto' or a positive scalar. MATLAB obtains the random basis for random feature expansion by using the kernel scale parameter. For details, see Random Feature Expansion.

If you specify 'auto', then MATLAB selects an appropriate kernel scale parameter using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed by using rng before training.
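For example, a reproducible call with the heuristic (the seed value is arbitrary):

rng(1) % fix the seed so the subsampling heuristic is repeatable
Mdl = fitrkernel(X,Y,'KernelScale','auto');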
Example: 'KernelScale','auto'

Data Types: char | string | single | double
Lambda — Regularization term strength
'auto' (default) | nonnegative scalar

Regularization term strength, specified as the comma-separated pair consisting of 'Lambda' and 'auto' or a nonnegative scalar.

For 'auto', the value of 'Lambda' is 1/n, where n is the number of observations (rows in X).

You can specify either 'BoxConstraint' or 'Lambda' because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn).
Example: 'Lambda',0.01

Data Types: char | string | single | double
Learner — Linear regression model type
'svm' (default) | 'leastsquares'

Linear regression model type, specified as the comma-separated pair consisting of 'Learner' and 'svm' or 'leastsquares'.

In the following table,

x is an observation (row vector) from p predictor variables.

T(x) is a transformation of an observation (row vector) for feature expansion. T(x) maps x in ℝ^p to a high-dimensional space (ℝ^m).

β is a vector of m coefficients.

b is the scalar bias.

Value | Algorithm | Response range | Loss function |
---|---|---|---|
'leastsquares' | Linear regression via ordinary least squares | y ∊ (-∞,∞) | Mean squared error (MSE): ℓ[y,f(x)] = [y − f(x)]²/2 |
'svm' | Support vector machine regression | Same as'leastsquares' | Epsilon-insensitive: ℓ[y,f(x)] = max(0, abs(y − f(x)) − ε) |
Example: 'Learner','leastsquares'

Verbose — Verbosity level
0 (default) | 1

Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and either 0 or 1. Verbose controls the amount of diagnostic information fitrkernel displays at the command line.

Value | Description |
---|---|
0 | fitrkernel does not display diagnostic information. |
1 | fitrkernel displays and stores the value of the objective function, gradient magnitude, and other diagnostic information. FitInfo.History contains the diagnostic information. |

Example: 'Verbose',1

Data Types: single | double
BlockSize — Maximum amount of allocated memory
4e3 (4GB) (default) | positive scalar

Maximum amount of allocated memory (in megabytes), specified as the comma-separated pair consisting of 'BlockSize' and a positive scalar.

If fitrkernel requires more memory than the value of BlockSize to hold the transformed predictor data, then MATLAB uses a block-wise strategy. For details about the block-wise strategy, see Algorithms.

Example: 'BlockSize',1e4

Data Types: single | double
RandomStream — Random number stream
global stream (default) | random stream object

Random number stream for reproducibility of data transformation, specified as the comma-separated pair consisting of 'RandomStream' and a random stream object. For details, see Random Feature Expansion.

Use 'RandomStream' to reproduce the random basis functions that fitrkernel uses to transform the data in X to a high-dimensional space. For details, see Managing the Global Stream Using RandStream and Creating and Controlling a Random Number Stream.

Example: 'RandomStream',RandStream('mlfg6331_64')
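For instance, a sketch that makes the feature transformation repeatable across calls (the generator type mirrors the example above; the seed is arbitrary):

s = RandStream('mlfg6331_64','Seed',0); % independent stream with a fixed seed
Mdl = fitrkernel(X,Y,'RandomStream',s); % reproducible random basis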
CategoricalPredictors — Categorical predictors list
vector of positive integers | logical vector | character matrix | string array | cell array of character vectors | 'all'

Categorical predictors list, specified as one of the values in this table.

Value | Description |
---|---|
Vector of positive integers | Each entry in the vector is an index value indicating that the corresponding predictor is categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If fitrkernel uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. |
Logical vector | A true entry means that the corresponding predictor is categorical. The length of the vector is p. |
Character matrix | Each row of the matrix is the name of a predictor variable. The names must match the entries inPredictorNames. Pad the names with extra blanks so each row of the character matrix has the same length. |
String array or cell array of character vectors | Each element in the array is the name of a predictor variable. The names must match the entries inPredictorNames. |
"all" | All predictors are categorical. |

By default, if the predictor data is in a table (Tbl), fitrkernel assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (X), fitrkernel assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the 'CategoricalPredictors' name-value argument.

For the identified categorical predictors, fitrkernel creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. For an unordered categorical variable, fitrkernel creates one dummy variable for each level of the categorical variable. For an ordered categorical variable, fitrkernel creates one less dummy variable than the number of categories. For details, see Automatic Creation of Dummy Variables.
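As a brief sketch, flagging the second column of a predictor matrix as categorical (the column choice is illustrative):

Mdl = fitrkernel(X,Y,'CategoricalPredictors',2); % treat X(:,2) as categorical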
Example: 'CategoricalPredictors','all'

Data Types: single | double | logical | char | string | cell
PredictorNames — Predictor variable names
string array of unique names | cell array of unique character vectors

Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of PredictorNames depends on the way you supply the training data.

If you supply X and Y, then you can use PredictorNames to assign names to the predictor variables in X.

The order of the names in PredictorNames must correspond to the column order of X. That is, PredictorNames{1} is the name of X(:,1), PredictorNames{2} is the name of X(:,2), and so on. Also, size(X,2) and numel(PredictorNames) must be equal.

By default, PredictorNames is {'x1','x2',...}.

If you supply Tbl, then you can use PredictorNames to choose which predictor variables to use in training. That is, fitrkernel uses only the predictor variables in PredictorNames and the response variable during training.

PredictorNames must be a subset of Tbl.Properties.VariableNames and cannot include the name of the response variable.

By default, PredictorNames contains the names of all predictor variables.

A good practice is to specify the predictors for training using either PredictorNames or formula, but not both.

Example: "PredictorNames",["SepalLength","SepalWidth","PetalLength","PetalWidth"]

Data Types: string | cell
ResponseName — Response variable name
"Y" (default) | character vector | string scalar

Response variable name, specified as a character vector or string scalar.

If you supply Y, then you can use ResponseName to specify a name for the response variable.

If you supply ResponseVarName or formula, then you cannot use ResponseName.

Example: "ResponseName","response"

Data Types: char | string
ResponseTransform — Response transformation
'none' (default) | function handle

Response transformation, specified as either 'none' or a function handle. The default is 'none', which means @(y)y, or no transformation. For a MATLAB function or a function you define, use its function handle for the response transformation. The function handle must accept a vector (the original response values) and return a vector of the same size (the transformed response values).

Example: Suppose you create a function handle that applies an exponential transformation to an input vector by using myfunction = @(y)exp(y). Then, you can specify the response transformation as 'ResponseTransform',myfunction.

Data Types: char | string | function_handle
Weights — Observation weights
vector of scalar values | name of variable in Tbl

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a vector of scalar values or the name of a variable in Tbl. The software weights each observation (or row) in X or Tbl with the corresponding value in Weights. The length of Weights must equal the number of rows in X or Tbl.

If you specify the input data as a table Tbl, then Weights can be the name of a variable in Tbl that contains a numeric vector. In this case, you must specify Weights as a character vector or string scalar. For example, if the weights vector W is stored as Tbl.W, then specify it as 'W'. Otherwise, the software treats all columns of Tbl, including W, as predictors when training the model.

By default, Weights is ones(n,1), where n is the number of observations in X or Tbl.

fitrkernel normalizes the weights to sum to 1.

Data Types: single | double | char | string
CrossVal — Cross-validation flag
'off' (default) | 'on'

Cross-validation flag, specified as the comma-separated pair consisting of 'Crossval' and 'on' or 'off'.

If you specify 'on', then the software implements 10-fold cross-validation.

You can override this cross-validation setting using the CVPartition, Holdout, KFold, or Leaveout name-value pair argument. You can use only one cross-validation name-value pair argument at a time to create a cross-validated model.

Example: 'Crossval','on'
CVPartition — Cross-validation partition
[] (default) | cvpartition partition object

Cross-validation partition, specified as a cvpartition partition object created by cvpartition. The partition object specifies the type of cross-validation and the indexing for the training and validation sets.

To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: Suppose you create a random partition for 5-fold cross-validation on 500 observations by using cvp = cvpartition(500,'KFold',5). Then, you can specify the cross-validated model by using 'CVPartition',cvp.
Holdout — Fraction of data for holdout validation
scalar value in the range (0,1)

Fraction of the data used for holdout validation, specified as a scalar value in the range (0,1). If you specify 'Holdout',p, then the software completes these steps:

Randomly select and reserve p*100% of the data as validation data, and train the model using the rest of the data.

Store the compact, trained model in the Trained property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: 'Holdout',0.1

Data Types: double | single
KFold — Number of folds
10 (default) | positive integer value greater than 1

Number of folds to use in a cross-validated model, specified as a positive integer value greater than 1. If you specify 'KFold',k, then the software completes these steps:

Randomly partition the data into k sets.

For each set, reserve the set as validation data, and train the model using the other k – 1 sets.

Store the k compact, trained models in a k-by-1 cell vector in the Trained property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: 'KFold',5

Data Types: single | double
Leaveout — Leave-one-out cross-validation flag
'off' (default) | 'on'

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of 'Leaveout' and 'on' or 'off'. If you specify 'Leaveout','on', then, for each of the n observations (where n is the number of observations excluding missing observations), the software completes these steps:

Reserve the observation as validation data, and train the model using the other n – 1 observations.

Store the n compact, trained models in the cells of an n-by-1 cell vector in the Trained property of the cross-validated model.

To create a cross-validated model, you can use only one of these four name-value pair arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: 'Leaveout','on'
BetaTolerance — Relative tolerance on linear coefficients and bias term
1e-5 (default) | nonnegative scalar

Relative tolerance on the linear coefficients and the bias term (intercept), specified as the comma-separated pair consisting of 'BetaTolerance' and a nonnegative scalar.

Let B_t = [β_t′ b_t]′, that is, the vector of the coefficients and the bias term at optimization iteration t. If the relative change ‖B_t − B_{t−1}‖₂/‖B_t‖₂ is less than BetaTolerance, then optimization terminates.

If you also specify GradientTolerance, then optimization terminates when the software satisfies either stopping criterion.

Example: 'BetaTolerance',1e-6

Data Types: single | double
GradientTolerance — Absolute gradient tolerance
1e-6 (default) | nonnegative scalar

Absolute gradient tolerance, specified as the comma-separated pair consisting of 'GradientTolerance' and a nonnegative scalar.

Let ∇L_t be the gradient vector of the objective function with respect to the coefficients and the bias term at optimization iteration t. If ‖∇L_t‖_∞ < GradientTolerance, then optimization terminates.

If you also specify BetaTolerance, then optimization terminates when the software satisfies either stopping criterion.

Example: 'GradientTolerance',1e-5

Data Types: single | double
HessianHistorySize — Size of history buffer for Hessian approximation
15 (default) | positive integer

Size of the history buffer for Hessian approximation, specified as the comma-separated pair consisting of 'HessianHistorySize' and a positive integer. At each iteration, fitrkernel composes the Hessian by using statistics from the latest HessianHistorySize iterations.

Example: 'HessianHistorySize',10

Data Types: single | double
IterationLimit — Maximum number of optimization iterations
positive integer

Maximum number of optimization iterations, specified as the comma-separated pair consisting of 'IterationLimit' and a positive integer.

The default value is 1000 if the transformed data fits in memory, as specified by BlockSize. Otherwise, the default value is 100.

Example: 'IterationLimit',500

Data Types: single | double
OptimizeHyperparameters — Parameters to optimize
'none' (default) | 'auto' | 'all' | string array or cell array of eligible parameter names | vector of optimizableVariable objects

Parameters to optimize, specified as the comma-separated pair consisting of 'OptimizeHyperparameters' and one of these values:

'none' — Do not optimize.

'auto' — Use {'KernelScale','Lambda','Epsilon'}.

'all' — Optimize all eligible parameters.

Cell array of eligible parameter names.

Vector of optimizableVariable objects, typically the output of hyperparameters.

The optimization attempts to minimize the cross-validation loss (error) for fitrkernel by varying the parameters. To control the cross-validation type and other aspects of the optimization, use the HyperparameterOptimizationOptions name-value pair argument.

Note

The values of 'OptimizeHyperparameters' override any values you specify using other name-value arguments. For example, setting 'OptimizeHyperparameters' to 'auto' causes fitrkernel to optimize hyperparameters corresponding to the 'auto' option and to ignore any specified values for the hyperparameters.

The eligible parameters for fitrkernel are:

Epsilon — fitrkernel searches among positive values, by default log-scaled in the range [1e-3,1e2]*iqr(Y)/1.349.

KernelScale — fitrkernel searches among positive values, by default log-scaled in the range [1e-3,1e3].

Lambda — fitrkernel searches among positive values, by default log-scaled in the range [1e-3,1e3]/n, where n is the number of observations.

Learner — fitrkernel searches among 'svm' and 'leastsquares'.

NumExpansionDimensions — fitrkernel searches among positive integers, by default log-scaled in the range [100,10000].

Set nondefault parameters by passing a vector of optimizableVariable objects that have nondefault values. For example:

load carsmall
params = hyperparameters('fitrkernel',[Horsepower,Weight],MPG);
params(2).Range = [1e-4,1e6];

Pass params as the value of 'OptimizeHyperparameters', as in the sketch below.
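A minimal sketch of that call, continuing from the carsmall snippet above:

Mdl = fitrkernel([Horsepower,Weight],MPG,'OptimizeHyperparameters',params);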
By default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is log(1 + cross-validation loss). To control the iterative display, set the Verbose field of the 'HyperparameterOptimizationOptions' name-value argument. To control the plots, set the ShowPlots field of the 'HyperparameterOptimizationOptions' name-value argument.

For an example, see Optimize Kernel Regression.

Example: 'OptimizeHyperparameters','auto'
HyperparameterOptimizationOptions — Options for optimization
structure

Options for optimization, specified as a structure. This argument modifies the effect of the OptimizeHyperparameters name-value argument. All fields in the structure are optional.

Field Name | Values | Default |
---|---|---|
Optimizer | 'bayesopt' (Bayesian optimization), 'gridsearch' (grid search), or 'randomsearch' (random search) | 'bayesopt' |
AcquisitionFunctionName | 'expected-improvement-per-second-plus', 'expected-improvement', 'expected-improvement-plus', 'expected-improvement-per-second', 'lower-confidence-bound', or 'probability-of-improvement'. Acquisition functions whose names include per-second do not yield reproducible results, because the optimization depends on the runtime of the objective function. | 'expected-improvement-per-second-plus' |
MaxObjectiveEvaluations | Maximum number of objective function evaluations. | 30 for'bayesopt' and'randomsearch', and the entire grid for'gridsearch' |
MaxTime | Time limit, specified as a positive real scalar. The time limit is in seconds, as measured by tic and toc. The run time can exceed MaxTime because MaxTime does not interrupt function evaluations. | Inf |
NumGridDivisions | For'gridsearch', the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. This field is ignored for categorical variables. | 10 |
ShowPlots | Logical value indicating whether to show plots. Iftrue, this field plots the best observed objective function value against the iteration number. If you use Bayesian optimization (Optimizer is'bayesopt'), then this field also plots the best estimated objective function value. The best observed objective function values and best estimated objective function values correspond to the values in theBestSoFar (observed) andBestSoFar (estim.) columns of the iterative display, respectively. You can find these values in the propertiesObjectiveMinimumTrace andEstimatedObjectiveMinimumTrace ofMdl.HyperparameterOptimizationResults. If the problem includes one or two optimization parameters for Bayesian optimization, thenShowPlots also plots a model of the objective function against the parameters. | true |
SaveIntermediateResults | Logical value indicating whether to save results whenOptimizer is'bayesopt'. Iftrue, this field overwrites a workspace variable named'BayesoptResults' at each iteration. The variable is aBayesianOptimization object. | false |
Verbose | Display at the command line: 0 — No iterative display; 1 — Iterative display; 2 — Iterative display with extra information. For details, see the bayesopt Verbose name-value argument. | 1 |
UseParallel | Logical value indicating whether to run Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. For details, see Parallel Bayesian Optimization. | false |
Repartition | Logical value indicating whether to repartition the cross-validation at every iteration. If this field is false, the optimizer uses a single partition for the optimization. The setting true usually gives the most robust results because this setting takes partitioning noise into account. However, for good results, true requires at least twice as many function evaluations. | false |
Use no more than one of the following three options. | | |
CVPartition | Acvpartition object, as created bycvpartition | 'Kfold',5 if you do not specify a cross-validation field |
Holdout | A scalar in the range(0,1) representing the holdout fraction | |
Kfold | An integer greater than 1 | |

Example: 'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',60)

Data Types: struct
Mdl — Trained kernel regression model
RegressionKernel model object | RegressionPartitionedKernel cross-validated model object

Trained kernel regression model, returned as a RegressionKernel model object or a RegressionPartitionedKernel cross-validated model object.

If you set any of the name-value pair arguments CrossVal, CVPartition, Holdout, KFold, or Leaveout, then Mdl is a RegressionPartitionedKernel cross-validated model. Otherwise, Mdl is a RegressionKernel model.

To reference properties of Mdl, use dot notation. For example, enter Mdl.NumExpansionDimensions in the Command Window to display the number of dimensions of the expanded space.

Note

Unlike other regression models, and for economical memory usage, a RegressionKernel model object does not store the training data or training process details (for example, convergence history).
FitInfo — Optimization details
structure array

Optimization details, returned as a structure array including fields described in this table. The fields contain final values or name-value pair argument specifications.

Field | Description |
---|---|
Solver | Objective function minimization technique: 'LBFGS-fast', 'LBFGS-blockwise', or 'LBFGS-tall'. For details, see Algorithms. |
LossFunction | Loss function. Either mean squared error (MSE) or epsilon-insensitive, depending on the type of linear regression model. SeeLearner. |
Lambda | Regularization term strength. SeeLambda. |
BetaTolerance | Relative tolerance on the linear coefficients and the bias term. SeeBetaTolerance. |
GradientTolerance | Absolute gradient tolerance. SeeGradientTolerance. |
ObjectiveValue | Value of the objective function when optimization terminates. The regression loss plus the regularization term compose the objective function. |
GradientMagnitude | Infinite norm of the gradient vector of the objective function when optimization terminates. SeeGradientTolerance. |
RelativeChangeInBeta | Relative changes in the linear coefficients and the bias term when optimization terminates. SeeBetaTolerance. |
FitTime | Elapsed, wall-clock time (in seconds) required to fit the model to the data. |
History | History of optimization information. This field also includes the optimization information from trainingMdl. This field is empty ([]) if you specify'Verbose',0. For details, seeVerbose andAlgorithms. |

To access fields, use dot notation. For example, to access the vector of objective function values for each iteration, enter FitInfo.ObjectiveValue in the Command Window.

Examine the information provided by FitInfo to assess whether convergence is satisfactory.
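For instance, a quick convergence check (a sketch; the thresholds come from the stored tolerances):

% Did the fit satisfy either documented stopping criterion?
ok = FitInfo.RelativeChangeInBeta < FitInfo.BetaTolerance || ...
     FitInfo.GradientMagnitude < FitInfo.GradientTolerance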
HyperparameterOptimizationResults — Cross-validation optimization of hyperparameters
BayesianOptimization object | table of hyperparameters and associated values

Cross-validation optimization of hyperparameters, returned as a BayesianOptimization object or a table of hyperparameters and associated values. The output is nonempty when the value of 'OptimizeHyperparameters' is not 'none'. The output value depends on the Optimizer field value of the 'HyperparameterOptimizationOptions' name-value pair argument:

Value ofOptimizer Field | Value ofHyperparameterOptimizationResults |
---|---|
'bayesopt' (default) | Object of classBayesianOptimization |
'gridsearch' or'randomsearch' | Table of hyperparameters used, observed objective function values (cross-validation loss), and rank of observations from lowest (best) to highest (worst) |
fitrkernel does not accept initial conditions for the linear coefficients beta (β) and bias term (b) used to determine the decision function, f(x) = T(x)β + b.

fitrkernel does not support standardization.
Random feature expansion, such as Random Kitchen Sinks[1] and Fastfood[2], is a scheme to approximate Gaussian kernels of the kernel regression algorithm for big data in a computationally efficient way. Random feature expansion is more practical for big data applications that have large training sets but can also be applied to smaller data sets that fit in memory.

The kernel regression algorithm searches for an optimal function that deviates from each response data point (yi) by values no greater than the epsilon margin (ε) after mapping the predictor data into a high-dimensional space.

Some regression problems cannot be described adequately using a linear model. In such cases, obtain a nonlinear regression model by replacing the dot product x1x2′ with a nonlinear kernel function G(x1,x2) = ⟨φ(x1),φ(x2)⟩, where xi is the ith observation (row vector) and φ(xi) is a transformation that maps xi to a high-dimensional space (called the "kernel trick"). However, evaluating G(x1,x2), the Gram matrix, for each pair of observations is computationally expensive for a large data set (large n).

The random feature expansion scheme finds a random transformation so that its dot product approximates the Gaussian kernel. That is,

G(x1,x2) = ⟨φ(x1),φ(x2)⟩ ≈ T(x1)T(x2)′,

where T(x) maps x in ℝ^p to a high-dimensional space (ℝ^m). The Random Kitchen Sink[1] scheme uses the random transformation

T(x) = m^(−1/2) exp(iZx′)′,

where Z ∈ ℝ^(m×p) is a sample drawn from N(0,σ^(−2)) and σ² is a kernel scale. This scheme requires O(mp) computation and storage. The Fastfood[2] scheme introduces another random basis V instead of Z using Hadamard matrices combined with Gaussian scaling matrices. This random basis reduces computation cost to O(m log p) and reduces storage to O(m).
You can specify values for m and σ² using the NumExpansionDimensions and KernelScale name-value pair arguments of fitrkernel, respectively.

The fitrkernel function uses the Fastfood scheme for random feature expansion and uses linear regression to train a Gaussian kernel regression model. Unlike solvers in the fitrsvm function, which require computation of the n-by-n Gram matrix, the solver in fitrkernel only needs to form a matrix of size n-by-m, with m typically much less than n for big data.

A box constraint is a parameter that controls the maximum penalty imposed on observations that lie outside the epsilon margin (ε), and helps to prevent overfitting (regularization). Increasing the box constraint can lead to longer training times.

The box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations.
fitrkernel minimizes the regularized objective function using a Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) solver with ridge (L2) regularization. To find the type of LBFGS solver used for training, type FitInfo.Solver in the Command Window.

'LBFGS-fast' — LBFGS solver.

'LBFGS-blockwise' — LBFGS solver with a block-wise strategy. If fitrkernel requires more memory than the value of BlockSize to hold the transformed predictor data, then it uses a block-wise strategy.

'LBFGS-tall' — LBFGS solver with a block-wise strategy for tall arrays.

When fitrkernel uses a block-wise strategy, fitrkernel implements LBFGS by distributing the calculation of the loss and gradient among different parts of the data at each iteration. Also, fitrkernel refines the initial estimates of the linear coefficients and the bias term by fitting the model locally to parts of the data and combining the coefficients by averaging. If you specify 'Verbose',1, then fitrkernel displays diagnostic information for each data pass and stores the information in the History field of FitInfo.

When fitrkernel does not use a block-wise strategy, the initial estimates are zeros. If you specify 'Verbose',1, then fitrkernel displays diagnostic information for each iteration and stores the information in the History field of FitInfo.
[1] Rahimi, A., and B. Recht. "Random Features for Large-Scale Kernel Machines." Advances in Neural Information Processing Systems. Vol. 20, 2008, pp. 1177–1184.

[2] Le, Q., T. Sarlós, and A. Smola. "Fastfood — Approximating Kernel Expansions in Loglinear Time." Proceedings of the 30th International Conference on Machine Learning. Vol. 28, No. 3, 2013, pp. 244–252.

[3] Huang, P. S., H. Avron, T. N. Sainath, V. Sindhwani, and B. Ramabhadran. "Kernel Methods Match Deep Neural Networks on TIMIT." 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. 2014, pp. 205–209.
Usage notes and limitations:

fitrkernel does not support tall table data.

Some name-value pair arguments have different defaults compared to the default values for the in-memory fitrkernel function. Supported name-value pair arguments, and any differences, are:

'BoxConstraint'

'Epsilon'

'NumExpansionDimensions'

'KernelScale'

'Lambda'

'Learner'

'Verbose' — Default value is 1.

'BlockSize'

'RandomStream'

'ResponseTransform'

'Weights' — Value must be a tall array.

'BetaTolerance' — Default value is relaxed to 1e-3.

'GradientTolerance' — Default value is relaxed to 1e-5.

'HessianHistorySize'

'IterationLimit' — Default value is relaxed to 20.

'OptimizeHyperparameters'

'HyperparameterOptimizationOptions' — For cross-validation, tall optimization supports only 'Holdout' validation. By default, the software selects and reserves 20% of the data as holdout validation data, and trains the model using the rest of the data. You can specify a different value for the holdout fraction by using this argument. For example, specify 'HyperparameterOptimizationOptions',struct('Holdout',0.3) to reserve 30% of the data as validation data.
If 'KernelScale' is 'auto', then fitrkernel uses the random stream controlled by tallrng for subsampling. For reproducibility, you must set a random number seed for both the global stream and the random stream controlled by tallrng.
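A sketch of seeding both streams before training on tall data (the seed choices are arbitrary):

rng('default')     % global stream, used for the random basis
tallrng('default') % tall-array stream, used for the subsampling heuristic
[Mdl,FitInfo] = fitrkernel(Z,Y,'KernelScale','auto');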
If 'Lambda' is 'auto', then fitrkernel might take an extra pass through the data to calculate the number of observations in X.

fitrkernel uses a block-wise strategy. For details, see Algorithms.

For more information, see Tall Arrays.
To perform parallel hyperparameter optimization, use the 'HyperparameterOptimizationOptions',struct('UseParallel',true) name-value argument in the call to the fitrkernel function, as sketched below.
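A minimal sketch (requires Parallel Computing Toolbox; results can differ between runs because of parallel timing):

Mdl = fitrkernel(X,Y,'OptimizeHyperparameters','auto',...
    'HyperparameterOptimizationOptions',struct('UseParallel',true));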
For more information on parallel hyperparameter optimization, see Parallel Bayesian Optimization.

For general information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).
bayesopt | bestPoint | fitrlinear | fitrsvm | loss | predict | RegressionKernel | resume | RegressionPartitionedKernel