
fitrkernel

Fit Gaussian kernel regression model using random feature expansion

Description

fitrkernel trains or cross-validates a Gaussian kernel regression model for nonlinear regression. fitrkernel is more practical for big data applications that have large training sets, but can also be applied to smaller data sets that fit in memory.

fitrkernel maps data in a low-dimensional space into a high-dimensional space, then fits a linear model in the high-dimensional space by minimizing the regularized objective function. Obtaining the linear model in the high-dimensional space is equivalent to applying the Gaussian kernel to the model in the low-dimensional space. Available linear regression models include regularized support vector machine (SVM) and least-squares regression models.

To train a nonlinear SVM regression model on in-memory data, see fitrsvm.


Mdl = fitrkernel(X,Y) returns a compact Gaussian kernel regression model trained using the predictor data in X and the corresponding responses in Y.

Mdl = fitrkernel(Tbl,ResponseVarName) returns a kernel regression model Mdl trained using the predictor variables contained in the table Tbl and the response values in Tbl.ResponseVarName.

Mdl = fitrkernel(Tbl,formula) returns a kernel regression model trained using the sample data in the table Tbl. The input argument formula is an explanatory model of the response and a subset of predictor variables in Tbl used to fit Mdl.

Mdl = fitrkernel(Tbl,Y) returns a kernel regression model using the predictor variables in the table Tbl and the response values in vector Y.


Mdl = fitrkernel(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can implement least-squares regression, specify the number of dimensions of the expanded space, or specify cross-validation options.


[Mdl,FitInfo] = fitrkernel(___) also returns the fit information in the structure array FitInfo using any of the input arguments in the previous syntaxes. You cannot request FitInfo for cross-validated models.


[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(___) also returns the hyperparameter optimization results when you optimize hyperparameters by using the 'OptimizeHyperparameters' name-value pair argument.

Examples


Train a kernel regression model for a tall array by using SVM.

When you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. To run the example using the local MATLAB session when you have Parallel Computing Toolbox, change the global execution environment by using the mapreducer function.

mapreducer(0)

Create a datastore that references the folder location with the data. The data can be contained in a single file, a collection of files, or an entire folder. Treat 'NA' values as missing data so that datastore replaces them with NaN values. Select a subset of the variables to use. Create a tall table on top of the datastore.

varnames = {'ArrTime','DepTime','ActualElapsedTime'};
ds = datastore('airlinesmall.csv','TreatAsMissing','NA',...
    'SelectedVariableNames',varnames);
t = tall(ds);

Specify DepTime and ArrTime as the predictor variables (X) and ActualElapsedTime as the response variable (Y). Select the observations for which ArrTime is later than DepTime.

daytime = t.ArrTime>t.DepTime;
Y = t.ActualElapsedTime(daytime); % Response data
X = t{daytime,{'DepTime','ArrTime'}}; % Predictor data

Standardize the predictor variables.

Z = zscore(X); % Standardize the data

Train a default Gaussian kernel regression model with the standardized predictors. Extract a fit summary to determine how well the optimization algorithm fits the model to the data.

[Mdl,FitInfo] = fitrkernel(Z,Y)
Found 6 chunks.
|=========================================================================
| Solver | Iteration / |  Objective   |  Gradient    | Beta relative |
|        |  Data Pass  |              |  magnitude   |    change     |
|=========================================================================
|   INIT |    0 /  1   | 4.307833e+01 | 4.345788e-02 |           NaN |
|  LBFGS |    0 /  2   | 3.705713e+01 | 1.577301e-02 |  9.988252e-01 |
|  LBFGS |    1 /  3   | 3.704022e+01 | 3.082836e-02 |  1.338410e-03 |
|  LBFGS |    2 /  4   | 3.701398e+01 | 3.006488e-02 |  1.116070e-03 |
|  LBFGS |    2 /  5   | 3.698797e+01 | 2.870642e-02 |  2.234599e-03 |
|  LBFGS |    2 /  6   | 3.693687e+01 | 2.625581e-02 |  4.479069e-03 |
|  LBFGS |    2 /  7   | 3.683757e+01 | 2.239620e-02 |  8.997877e-03 |
|  LBFGS |    2 /  8   | 3.665038e+01 | 1.782358e-02 |  1.815682e-02 |
|  LBFGS |    3 /  9   | 3.473411e+01 | 4.074480e-02 |  1.778166e-01 |
|  LBFGS |    4 / 10   | 3.684246e+01 | 1.608942e-01 |  3.294968e-01 |
|  LBFGS |    4 / 11   | 3.441595e+01 | 8.587703e-02 |  1.420892e-01 |
|  LBFGS |    5 / 12   | 3.377755e+01 | 3.760006e-02 |  4.640134e-02 |
|  LBFGS |    6 / 13   | 3.357732e+01 | 1.912644e-02 |  3.842057e-02 |
|  LBFGS |    7 / 14   | 3.334081e+01 | 3.046709e-02 |  6.211243e-02 |
|  LBFGS |    8 / 15   | 3.309239e+01 | 3.858085e-02 |  6.411356e-02 |
|  LBFGS |    9 / 16   | 3.276577e+01 | 3.612292e-02 |  6.938579e-02 |
|  LBFGS |   10 / 17   | 3.234029e+01 | 2.734959e-02 |  1.144307e-01 |
|  LBFGS |   11 / 18   | 3.205763e+01 | 2.545990e-02 |  7.323180e-02 |
|  LBFGS |   12 / 19   | 3.183341e+01 | 2.472411e-02 |  3.689625e-02 |
|  LBFGS |   13 / 20   | 3.169307e+01 | 2.064613e-02 |  2.998555e-02 |
|=========================================================================
| Solver | Iteration / |  Objective   |  Gradient    | Beta relative |
|        |  Data Pass  |              |  magnitude   |    change     |
|=========================================================================
|  LBFGS |   14 / 21   | 3.146896e+01 | 1.788395e-02 |  5.967293e-02 |
|  LBFGS |   15 / 22   | 3.118171e+01 | 1.660696e-02 |  1.124062e-01 |
|  LBFGS |   16 / 23   | 3.106224e+01 | 1.506147e-02 |  7.947037e-02 |
|  LBFGS |   17 / 24   | 3.098395e+01 | 1.564561e-02 |  2.678370e-02 |
|  LBFGS |   18 / 25   | 3.096029e+01 | 4.464104e-02 |  4.547148e-02 |
|  LBFGS |   19 / 26   | 3.085475e+01 | 1.442800e-02 |  1.677268e-02 |
|  LBFGS |   20 / 27   | 3.078140e+01 | 1.906548e-02 |  2.275185e-02 |
|========================================================================|
Mdl = 
  RegressionKernel
            PredictorNames: {'x1'  'x2'}
              ResponseName: 'Y'
                   Learner: 'svm'
    NumExpansionDimensions: 64
               KernelScale: 1
                    Lambda: 8.5385e-06
             BoxConstraint: 1
                   Epsilon: 5.9303

  Properties, Methods

FitInfo = struct with fields:
                  Solver: 'LBFGS-tall'
            LossFunction: 'epsiloninsensitive'
                  Lambda: 8.5385e-06
           BetaTolerance: 1.0000e-03
       GradientTolerance: 1.0000e-05
          ObjectiveValue: 30.7814
       GradientMagnitude: 0.0191
    RelativeChangeInBeta: 0.0228
                 FitTime: 50.0477
                 History: [1x1 struct]

Mdl is a RegressionKernel model. To inspect the regression error, you can pass Mdl and the training data or new data to the loss function. Or, you can pass Mdl and new predictor data to the predict function to predict responses for new observations. You can also pass Mdl and the training data to the resume function to continue training.
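
For instance, a minimal sketch of these follow-up steps, assuming the tall arrays Z and Y defined above:

YFit = predict(Mdl,Z);        % predicted responses for the predictor data
L = loss(Mdl,Z,Y);            % regression loss (mean squared error by default)
UpdatedMdl = resume(Mdl,Z,Y); % continue training from the current estimates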

FitInfo is a structure array containing optimization information. Use FitInfo to determine whether optimization termination measurements are satisfactory.

For improved accuracy, you can increase the maximum number of optimization iterations ('IterationLimit') and decrease the tolerance values ('BetaTolerance' and 'GradientTolerance') by using the name-value pair arguments of fitrkernel. Doing so can improve measures like ObjectiveValue and RelativeChangeInBeta in FitInfo. You can also optimize model parameters by using the 'OptimizeHyperparameters' name-value pair argument.

Load the carbig data set.

load carbig

Specify the predictor variables (X) and the response variable (Y).

X = [Acceleration,Cylinders,Displacement,Horsepower,Weight];
Y = MPG;

Delete rows of X and Y where either array has NaN values. Removing rows with NaN values before passing data to fitrkernel can speed up training and reduce memory usage.

R = rmmissing([X Y]); % Data with missing entries removed
X = R(:,1:5);
Y = R(:,end);

Standardize the predictor variables.

Z = zscore(X);

Cross-validate a kernel regression model using 5-fold cross-validation.

Mdl = fitrkernel(Z,Y,'Kfold',5)
Mdl = 
  RegressionPartitionedKernel
    CrossValidatedModel: 'Kernel'
           ResponseName: 'Y'
        NumObservations: 392
                  KFold: 5
              Partition: [1x1 cvpartition]
      ResponseTransform: 'none'

  Properties, Methods
numel(Mdl.Trained)
ans = 5

Mdl is a RegressionPartitionedKernel model. Because fitrkernel implements five-fold cross-validation, Mdl contains five RegressionKernel models that the software trains on training-fold (in-fold) observations.

Examine the cross-validation loss (mean squared error) for each fold.

kfoldLoss(Mdl,'mode','individual')

ans = 5×1

   13.0610
   14.0975
   24.0104
   21.1223
   24.3979

Optimize hyperparameters automatically using the 'OptimizeHyperparameters' name-value pair argument.

Load the carbig data set.

load carbig

Specify the predictor variables (X) and the response variable (Y).

X = [Acceleration,Cylinders,Displacement,Horsepower,Weight];
Y = MPG;

Delete rows of X and Y where either array has NaN values. Removing rows with NaN values before passing data to fitrkernel can speed up training and reduce memory usage.

R = rmmissing([X Y]); % Data with missing entries removed
X = R(:,1:5);
Y = R(:,end);

Standardize the predictor variables.

Z = zscore(X);

Find hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization. Specify 'OptimizeHyperparameters' as 'auto' so that fitrkernel finds the optimal values of the 'KernelScale', 'Lambda', and 'Epsilon' name-value pair arguments. For reproducibility, set the random seed and use the 'expected-improvement-plus' acquisition function.

rng('default')
[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(Z,Y,'OptimizeHyperparameters','auto',...
    'HyperparameterOptimizationOptions',struct('AcquisitionFunctionName','expected-improvement-plus'))
|====================================================================================================================|
| Iter | Eval   | Objective:  | Objective | BestSoFar  | BestSoFar  | KernelScale | Lambda     | Epsilon  |
|      | result | log(1+loss) | runtime   | (observed) | (estim.)   |             |            |          |
|====================================================================================================================|
|    1 | Best   |      4.8295 |    1.0202 |     4.8295 |     4.8295 |    0.011518 | 6.8068e-05 |  0.95918 |
|    2 | Best   |      4.1488 |   0.20075 |     4.1488 |     4.1855 |      477.57 |   0.066115 | 0.091828 |
|    3 | Accept |      4.1521 |   0.23448 |     4.1488 |     4.1747 |   0.0080478 |  0.0052867 |   520.84 |
|    4 | Accept |      4.1506 |   0.19343 |     4.1488 |     4.1488 |     0.10935 |    0.35931 | 0.013372 |
|    5 | Best   |      4.1446 |   0.22183 |     4.1446 |     4.1446 |      326.29 |     2.5457 |  0.22475 |
|    6 | Accept |      4.1521 |   0.20642 |     4.1446 |     4.1447 |      932.16 |    0.19667 |   873.68 |
|    7 | Accept |      4.1501 |   0.44117 |     4.1446 |     4.1461 |    0.052426 |     2.5402 | 0.051319 |
|    8 | Best   |      4.1408 |    0.2327 |     4.1408 |     4.1402 |      850.91 |    0.01462 |  0.37284 |
|    9 | Accept |      4.1521 |   0.35173 |     4.1408 |     4.1427 |    0.019352 |   0.012035 |   63.493 |
|   10 | Accept |      4.1521 |   0.21123 |     4.1408 |     4.1452 |      853.22 |     1.0698 |   55.679 |
|   11 | Accept |      4.1521 |   0.34269 |     4.1408 |     4.1416 |      1.4548 |   0.022234 |   26.275 |
|   12 | Accept |      4.1509 |   0.19879 |     4.1408 |     4.1469 |      877.82 |  0.0071133 | 0.012021 |
|   13 | Accept |      4.1422 |   0.85467 |     4.1408 |     4.1455 |      944.08 |   0.011177 |  0.31055 |
|   14 | Accept |      4.2032 |   0.29126 |     4.1408 |     4.1405 |      979.21 |   0.010842 |   13.776 |
|   15 | Accept |      4.1438 |   0.20706 |     4.1408 |     4.1509 |    0.001234 |   0.018449 | 0.044225 |
|   16 | Best   |      4.1372 |   0.17378 |     4.1372 |     4.1511 |      1.7802 |     2.5477 | 0.014737 |
|   17 | Accept |      4.1521 |   0.24019 |     4.1372 |     4.1466 |   0.0015946 |     2.5474 |   590.35 |
|   18 | Accept |      4.1452 |   0.20963 |     4.1372 |     4.1464 |    0.058846 |     1.0766 |  0.20569 |
|   19 | Accept |      4.1521 |   0.32159 |     4.1372 |     4.1461 |       2.187 | 2.5531e-06 |   278.92 |
|   20 | Accept |      4.1451 |   0.32948 |     4.1372 |     4.1461 |   0.0050283 |   0.039894 |  0.14402 |
|====================================================================================================================|
| Iter | Eval   | Objective:  | Objective | BestSoFar  | BestSoFar  | KernelScale | Lambda     | Epsilon  |
|      | result | log(1+loss) | runtime   | (observed) | (estim.)   |             |            |          |
|====================================================================================================================|
|   21 | Best   |      4.1362 |   0.26969 |     4.1362 |     4.1426 |   0.0029885 |   0.039099 |   6.3938 |
|   22 | Accept |      4.1521 |   0.20719 |     4.1362 |     4.1449 |    0.035949 |   0.038533 |   80.585 |
|   23 | Accept |      4.1399 |   0.36116 |     4.1362 |     4.1446 |      50.001 |   0.095432 |  0.19954 |
|   24 | Accept |      4.1487 |   0.52381 |     4.1362 |     4.1374 |    0.012199 |   0.089894 | 0.034773 |
|   25 | Accept |      4.1521 |   0.22703 |     4.1362 |     4.1447 |   0.0011871 |    0.30153 |   425.89 |
|   26 | Accept |      4.1466 |   0.45165 |     4.1362 |      4.145 |   0.0011773 |   0.052213 | 0.017592 |
|   27 | Accept |      4.1418 |   0.17754 |     4.1362 |      4.145 |       7.556 |      1.655 | 0.016225 |
|   28 | Accept |      4.1407 |    0.4172 |     4.1362 |      4.145 |     0.01201 |     1.6696 |  0.38806 |
|   29 | Accept |      5.4153 |    3.9701 |     4.1362 |     4.1365 |   0.0010531 | 1.1032e-05 | 0.034083 |
|   30 | Accept |      4.1521 |   0.34684 |     4.1362 |     4.1364 |      652.19 | 2.6286e-06 |   882.02 |

(Figure: "Min objective vs. Number of function evaluations," with two lines showing the minimum observed objective and the estimated minimum objective.)

__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 30 reached.
Total function evaluations: 30
Total elapsed time: 26.6629 seconds
Total objective function evaluation time: 13.4353

Best observed feasible point:
    KernelScale     Lambda      Epsilon
    ___________    ________    _______

     0.0029885     0.039099     6.3938

Observed objective function value = 4.1362
Estimated objective function value = 4.1364
Function evaluation time = 0.26969

Best estimated feasible point (according to models):
    KernelScale     Lambda      Epsilon
    ___________    ________    _______

     0.0029885     0.039099     6.3938

Estimated objective function value = 4.1364
Estimated function evaluation time = 0.31488
Mdl = 
  RegressionKernel
              ResponseName: 'Y'
                   Learner: 'svm'
    NumExpansionDimensions: 256
               KernelScale: 0.0030
                    Lambda: 0.0391
             BoxConstraint: 0.0652
                   Epsilon: 6.3938

  Properties, Methods

FitInfo = struct with fields:
                  Solver: 'LBFGS-fast'
            LossFunction: 'epsiloninsensitive'
                  Lambda: 0.0391
           BetaTolerance: 1.0000e-04
       GradientTolerance: 1.0000e-06
          ObjectiveValue: 1.7716
       GradientMagnitude: 0.0051
    RelativeChangeInBeta: 8.5572e-05
                 FitTime: 0.0237
                 History: []

HyperparameterOptimizationResults = 
  BayesianOptimization with properties:

                      ObjectiveFcn: @createObjFcn/inMemoryObjFcn
              VariableDescriptions: [5x1 optimizableVariable]
                           Options: [1x1 struct]
                      MinObjective: 4.1362
                   XAtMinObjective: [1x3 table]
             MinEstimatedObjective: 4.1364
          XAtMinEstimatedObjective: [1x3 table]
           NumObjectiveEvaluations: 30
                  TotalElapsedTime: 26.6629
                         NextPoint: [1x3 table]
                            XTrace: [30x3 table]
                    ObjectiveTrace: [30x1 double]
                  ConstraintsTrace: []
                     UserDataTrace: {30x1 cell}
      ObjectiveEvaluationTimeTrace: [30x1 double]
                IterationTimeTrace: [30x1 double]
                        ErrorTrace: [30x1 double]
                  FeasibilityTrace: [30x1 logical]
       FeasibilityProbabilityTrace: [30x1 double]
             IndexOfMinimumTrace: [30x1 double]
           ObjectiveMinimumTrace: [30x1 double]
  EstimatedObjectiveMinimumTrace: [30x1 double]

For big data, the optimization procedure can take a long time. If the data set is too large to run the optimization procedure, you can try to optimize the parameters using only partial data. Use the datasample function and specify 'Replace',false to sample data without replacement.
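
A hedged sketch of this approach, assuming the in-memory predictors Z and responses Y from above and an illustrative subsample size of 1000:

rng('default') % For reproducibility
n = numel(Y);
idx = datasample(1:n,min(1000,n),'Replace',false); % sample rows without replacement
MdlSub = fitrkernel(Z(idx,:),Y(idx),'OptimizeHyperparameters','auto');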

Input Arguments


Predictor data to which the regression model is fit, specified as an n-by-p numeric matrix, where n is the number of observations and p is the number of predictor variables.

The length of Y and the number of observations in X must be equal.

Data Types: single | double

Response data, specified as an n-dimensional numeric vector. The length of Y must be equal to the number of observations in X or Tbl.

Data Types: single | double

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

  • If Tbl contains the response variable, and you want to use all remaining variables in Tbl as predictors, then specify the response variable by using ResponseVarName.

  • If Tbl contains the response variable, and you want to use only a subset of the remaining variables in Tbl as predictors, then specify a formula by using formula.

  • If Tbl does not contain the response variable, then specify a response variable by using Y. The length of the response variable and the number of rows in Tbl must be equal.

Data Types: table

Response variable name, specified as the name of a variable in Tbl. The response variable must be a numeric vector.

You must specify ResponseVarName as a character vector or string scalar. For example, if Tbl stores the response variable Y as Tbl.Y, then specify it as 'Y'. Otherwise, the software treats all columns of Tbl, including Y, as predictors when training the model.

Data Types: char | string

Explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form "Y~x1+x2+x3". In this form, Y represents the response variable, and x1, x2, and x3 represent the predictor variables.

To specify a subset of variables in Tbl as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in Tbl that do not appear in formula.

The variable names in the formula must be both variable names in Tbl (Tbl.Properties.VariableNames) and valid MATLAB® identifiers. You can verify the variable names in Tbl by using the isvarname function. If the variable names are not valid, then you can convert them by using the matlab.lang.makeValidName function.

Data Types: char | string
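
For instance, a minimal sketch of this check, assuming a table Tbl:

names = Tbl.Properties.VariableNames;
valid = cellfun(@isvarname,names);                        % true for valid identifiers
names(~valid) = matlab.lang.makeValidName(names(~valid)); % repair invalid names
Tbl.Properties.VariableNames = names;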

Note

The software treats NaN, empty character vector (''), empty string (""), <missing>, and <undefined> elements as missing values, and removes observations with any of these characteristics:

  • Missing value in the response variable

  • At least one missing value in a predictor observation (row in X or Tbl)

  • NaN value or 0 weight ('Weights')

Name-Value Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: Mdl = fitrkernel(X,Y,'Learner','leastsquares','NumExpansionDimensions',2^15,'KernelScale','auto') implements least-squares regression after mapping the predictor data to the 2^15 dimensional space using feature expansion with a kernel scale parameter selected by a heuristic procedure.

Note

You cannot use any cross-validation name-value argument together with the 'OptimizeHyperparameters' name-value argument. You can modify the cross-validation for 'OptimizeHyperparameters' only by using the 'HyperparameterOptimizationOptions' name-value argument.

Kernel Regression Options


Box constraint, specified as the comma-separated pair consisting of 'BoxConstraint' and a positive scalar.

This argument is valid only when 'Learner' is 'svm' (default) and you do not specify a value for the regularization term strength 'Lambda'. You can specify either 'BoxConstraint' or 'Lambda' because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations (rows in X).

Example: 'BoxConstraint',100

Data Types: single | double

Half the width of the epsilon-insensitive band, specified as the comma-separated pair consisting of 'Epsilon' and 'auto' or a nonnegative scalar value.

For 'auto', the fitrkernel function determines the value of Epsilon as iqr(Y)/13.49, which is an estimate of a tenth of the standard deviation using the interquartile range of the response variable Y. If iqr(Y) is equal to zero, then fitrkernel sets the value of Epsilon to 0.1.
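
Written out as code, the heuristic looks like this (a hedged illustration, not the internal implementation):

e = iqr(Y)/13.49;  % roughly one tenth of the standard deviation of Y
if e == 0
    e = 0.1;       % fallback when iqr(Y) is zero
end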

'Epsilon' is valid only when Learner is 'svm'.

Example: 'Epsilon',0.3

Data Types: single | double

Number of dimensions of the expanded space, specified as the comma-separated pair consisting of'NumExpansionDimensions'and'auto'or a positive integer. For'auto', thefitrkernelfunction selects the number of dimensions using2.^ceil(min(log2(p)+5,15)), wherepis the number of predictors.
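
For example, the 'auto' selection evaluates to 256 when there are five predictors:

p = 5;                          % illustrative number of predictors
m = 2.^ceil(min(log2(p)+5,15))  % m = 256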

Example: 'NumExpansionDimensions',2^15

Data Types: char | string | single | double

Kernel scale parameter, specified as the comma-separated pair consisting of 'KernelScale' and 'auto' or a positive scalar. MATLAB obtains the random basis for random feature expansion by using the kernel scale parameter. For details, see Random Feature Expansion.

If you specify 'auto', then MATLAB selects an appropriate kernel scale parameter using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed by using rng before training.

Example: 'KernelScale','auto'

Data Types: char | string | single | double

Regularization term strength, specified as the comma-separated pair consisting of 'Lambda' and 'auto' or a nonnegative scalar.

For 'auto', the value of 'Lambda' is 1/n, where n is the number of observations (rows in X).

You can specify either 'BoxConstraint' or 'Lambda' because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn).

Example: 'Lambda',0.01

Data Types: char | string | single | double

Linear regression model type, specified as the comma-separated pair consisting of 'Learner' and 'svm' or 'leastsquares'.

In the following table, f(x) = T(x)β + b.

  • x is an observation (row vector) from p predictor variables.

  • T(·) is a transformation of an observation (row vector) for feature expansion. T(x) maps x in ℝᵖ to a high-dimensional space (ℝᵐ).

  • β is a vector of m coefficients.

  • b is the scalar bias.

Value Algorithm Response range Loss function
'leastsquares' Linear regression via ordinary least squares y ∊ (-∞,∞) Mean squared error (MSE): ℓ[y,f(x)] = (1/2)[y - f(x)]²
'svm' Support vector machine regression Same as 'leastsquares' Epsilon-insensitive: ℓ[y,f(x)] = max[0, |y - f(x)| - ε]

Example: 'Learner','leastsquares'

Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and either 0 or 1. Verbose controls the amount of diagnostic information fitrkernel displays at the command line.

Value Description
0 fitrkernel does not display diagnostic information.
1 fitrkernel displays and stores the value of the objective function, gradient magnitude, and other diagnostic information. FitInfo.History contains the diagnostic information.

Example: 'Verbose',1

Data Types: single | double

Maximum amount of allocated memory (in megabytes), specified as the comma-separated pair consisting of 'BlockSize' and a positive scalar.

If fitrkernel requires more memory than the value of BlockSize to hold the transformed predictor data, then MATLAB uses a block-wise strategy. For details about the block-wise strategy, see Algorithms.

Example: 'BlockSize',1e4

Data Types: single | double

Random number stream for reproducibility of data transformation, specified as the comma-separated pair consisting of 'RandomStream' and a random stream object. For details, see Random Feature Expansion.

Use 'RandomStream' to reproduce the random basis functions that fitrkernel uses to transform the data in X to a high-dimensional space. For details, see Managing the Global Stream Using RandStream and Creating and Controlling a Random Number Stream.

Example: 'RandomStream',RandStream('mlfg6331_64')
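
A minimal sketch of reproducing the transformation across two fits, assuming in-memory X and Y:

s = RandStream('mlfg6331_64');           % stream supporting substreams
Mdl1 = fitrkernel(X,Y,'RandomStream',s);
reset(s)                                 % restore the initial stream state
Mdl2 = fitrkernel(X,Y,'RandomStream',s); % uses the same random basis as Mdl1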

Other Regression Options


Categorical predictors list, specified as one of the values in this table.

Value Description
Vector of positive integers

Each entry in the vector is an index value indicating that the corresponding predictor is categorical. The index values are between 1 and p, where p is the number of predictors used to train the model.

If fitrkernel uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. The CategoricalPredictors values do not count the response variable, observation weight variable, or any other variables that the function does not use.

Logical vector

A true entry means that the corresponding predictor is categorical. The length of the vector is p.

Character matrix Each row of the matrix is the name of a predictor variable. The names must match the entries in PredictorNames. Pad the names with extra blanks so each row of the character matrix has the same length.
String array or cell array of character vectors Each element in the array is the name of a predictor variable. The names must match the entries in PredictorNames.
"all" All predictors are categorical.

By default, if the predictor data is in a table (Tbl), fitrkernel assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (X), fitrkernel assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the 'CategoricalPredictors' name-value argument.

For the identified categorical predictors, fitrkernel creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. For an unordered categorical variable, fitrkernel creates one dummy variable for each level of the categorical variable. For an ordered categorical variable, fitrkernel creates one less dummy variable than the number of categories. For details, see Automatic Creation of Dummy Variables.

Example: 'CategoricalPredictors','all'

Data Types: single | double | logical | char | string | cell

Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of PredictorNames depends on the way you supply the training data.

  • If you supply X and Y, then you can use PredictorNames to assign names to the predictor variables in X.

    • The order of the names in PredictorNames must correspond to the column order of X. That is, PredictorNames{1} is the name of X(:,1), PredictorNames{2} is the name of X(:,2), and so on. Also, size(X,2) and numel(PredictorNames) must be equal.

    • By default, PredictorNames is {'x1','x2',...}.

  • If you supply Tbl, then you can use PredictorNames to choose which predictor variables to use in training. That is, fitrkernel uses only the predictor variables in PredictorNames and the response variable during training.

    • PredictorNames must be a subset of Tbl.Properties.VariableNames and cannot include the name of the response variable.

    • By default, PredictorNames contains the names of all predictor variables.

    • A good practice is to specify the predictors for training using either PredictorNames or formula, but not both.

Example: "PredictorNames",["SepalLength","SepalWidth","PetalLength","PetalWidth"]

Data Types: string | cell

Response variable name, specified as a character vector or string scalar.

  • If you supply Y, then you can use ResponseName to specify a name for the response variable.

  • If you supply ResponseVarName or formula, then you cannot use ResponseName.

Example: "ResponseName","response"

Data Types: char | string

Response transformation, specified as either 'none' or a function handle. The default is 'none', which means @(y)y, or no transformation. For a MATLAB function or a function you define, use its function handle for the response transformation. The function handle must accept a vector (the original response values) and return a vector of the same size (the transformed response values).

Example: Suppose you create a function handle that applies an exponential transformation to an input vector by using myfunction = @(y)exp(y). Then, you can specify the response transformation as 'ResponseTransform',myfunction.

Data Types: char | string | function_handle
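
Building on that example, a hedged sketch that trains on log responses and returns predictions on the original scale (assumes in-memory X and strictly positive Y):

myfunction = @(y)exp(y);
Mdl = fitrkernel(X,log(Y),'ResponseTransform',myfunction);
yhat = predict(Mdl,X); % myfunction is applied to the raw predictions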

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a vector of scalar values or the name of a variable in Tbl. The software weights each observation (or row) in X or Tbl with the corresponding value in Weights. The length of Weights must equal the number of rows in X or Tbl.

If you specify the input data as a table Tbl, then Weights can be the name of a variable in Tbl that contains a numeric vector. In this case, you must specify Weights as a character vector or string scalar. For example, if the weights vector W is stored as Tbl.W, then specify it as 'W'. Otherwise, the software treats all columns of Tbl, including W, as predictors when training the model.

By default, Weights is ones(n,1), where n is the number of observations in X or Tbl.

fitrkernel normalizes the weights to sum to 1.

Data Types: single | double | char | string

Cross-Validation Options


Cross-validation flag, specified as the comma-separated pair consisting of 'Crossval' and 'on' or 'off'.

If you specify 'on', then the software implements 10-fold cross-validation.

You can override this cross-validation setting using the CVPartition, Holdout, KFold, or Leaveout name-value pair argument. You can use only one cross-validation name-value pair argument at a time to create a cross-validated model.

Example: 'Crossval','on'

Cross-validation partition, specified as a cvpartition partition object created by cvpartition. The partition object specifies the type of cross-validation and the indexing for the training and validation sets.

To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: Suppose you create a random partition for 5-fold cross-validation on 500 observations by using cvp = cvpartition(500,'KFold',5). Then, you can specify the cross-validated model by using 'CVPartition',cvp.
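
That example, written out as a sketch (assumes X and Y contain 500 observations):

cvp = cvpartition(500,'KFold',5);       % 5-fold partition of 500 observations
CVMdl = fitrkernel(X,Y,'CVPartition',cvp);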

Fraction of the data used for holdout validation, specified as a scalar value in the range (0,1). If you specify 'Holdout',p, then the software completes these steps:

  1. Randomly select and reserve p*100% of the data as validation data, and train the model using the rest of the data.

  2. Store the compact, trained model in the Trained property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: 'Holdout',0.1

Data Types: double | single

Number of folds to use in a cross-validated model, specified as a positive integer value greater than 1. If you specify 'KFold',k, then the software completes these steps:

  1. Randomly partition the data into k sets.

  2. For each set, reserve the set as validation data, and train the model using the other k – 1 sets.

  3. Store the k compact, trained models in a k-by-1 cell vector in the Trained property of the cross-validated model.

To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

Example: 'KFold',5

Data Types: single | double

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of 'Leaveout' and 'on' or 'off'. If you specify 'Leaveout','on', then, for each of the n observations (where n is the number of observations excluding missing observations), the software completes these steps:

  1. Reserve the observation as validation data, and train the model using the other n – 1 observations.

  2. Store the n compact, trained models in the cells of an n-by-1 cell vector in the Trained property of the cross-validated model.

To create a cross-validated model, you can use one of these four name-value pair arguments only: CVPartition, Holdout, KFold, or Leaveout.

Example: 'Leaveout','on'

Convergence Controls


Relative tolerance on the linear coefficients and the bias term (intercept), specified as the comma-separated pair consisting of 'BetaTolerance' and a nonnegative scalar.

Let B_t = [β_t; b_t], that is, the vector of the coefficients and the bias term at optimization iteration t. If ‖B_t − B_{t−1}‖₂ / ‖B_t‖₂ < BetaTolerance, then optimization terminates.

If you also specify GradientTolerance, then optimization terminates when the software satisfies either stopping criterion.

Example: 'BetaTolerance',1e-6

Data Types: single | double

Absolute gradient tolerance, specified as the comma-separated pair consisting of 'GradientTolerance' and a nonnegative scalar.

Let ∇ℒ_t be the gradient vector of the objective function with respect to the coefficients and bias term at optimization iteration t. If ‖∇ℒ_t‖∞ = max|∇ℒ_t| < GradientTolerance, then optimization terminates.

If you also specify BetaTolerance, then optimization terminates when the software satisfies either stopping criterion.

Example: 'GradientTolerance',1e-5

Data Types: single | double

Size of the history buffer for Hessian approximation, specified as the comma-separated pair consisting of 'HessianHistorySize' and a positive integer. At each iteration, fitrkernel composes the Hessian by using statistics from the latest HessianHistorySize iterations.

Example: 'HessianHistorySize',10

Data Types: single | double

Maximum number of optimization iterations, specified as the comma-separated pair consisting of 'IterationLimit' and a positive integer.

The default value is 1000 if the transformed data fits in memory, as specified by BlockSize. Otherwise, the default value is 100.

Example: 'IterationLimit',500

Data Types: single | double

Hyperparameter Optimization Options


Parameters to optimize, specified as the comma-separated pair consisting of 'OptimizeHyperparameters' and one of these values:

  • 'none' — Do not optimize.

  • 'auto' — Use {'KernelScale','Lambda','Epsilon'}.

  • 'all' — Optimize all eligible parameters.

  • Cell array of eligible parameter names.

  • Vector of optimizableVariable objects, typically the output of hyperparameters.

The optimization attempts to minimize the cross-validation loss (error) for fitrkernel by varying the parameters. To control the cross-validation type and other aspects of the optimization, use the HyperparameterOptimizationOptions name-value pair argument.

Note

The values of 'OptimizeHyperparameters' override any values you specify using other name-value arguments. For example, setting 'OptimizeHyperparameters' to 'auto' causes fitrkernel to optimize hyperparameters corresponding to the 'auto' option and to ignore any specified values for the hyperparameters.

The eligible parameters for fitrkernel are:

  • Epsilon — fitrkernel searches among positive values, by default log-scaled in the range [1e-3,1e2]*iqr(Y)/1.349.

  • KernelScale — fitrkernel searches among positive values, by default log-scaled in the range [1e-3,1e3].

  • Lambda — fitrkernel searches among positive values, by default log-scaled in the range [1e-3,1e3]/n, where n is the number of observations.

  • Learner — fitrkernel searches among 'svm' and 'leastsquares'.

  • NumExpansionDimensions — fitrkernel searches among positive integers, by default log-scaled in the range [100,10000].

Set nondefault parameters by passing a vector of optimizableVariable objects that have nondefault values. For example:

load carsmall
params = hyperparameters('fitrkernel',[Horsepower,Weight],MPG);
params(2).Range = [1e-4,1e6];

Pass params as the value of 'OptimizeHyperparameters'.
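
For example, a hedged sketch continuing the snippet above:

rng('default') % For reproducibility
Mdl = fitrkernel([Horsepower,Weight],MPG,'OptimizeHyperparameters',params);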

By default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is log(1 + cross-validation loss). To control the iterative display, set the Verbose field of the 'HyperparameterOptimizationOptions' name-value argument. To control the plots, set the ShowPlots field of the 'HyperparameterOptimizationOptions' name-value argument.

For an example, see Optimize Kernel Regression.

Example: 'OptimizeHyperparameters','auto'

Options for optimization, specified as a structure. This argument modifies the effect of the OptimizeHyperparameters name-value argument. All fields in the structure are optional.

Field Name Values Default
Optimizer
  • 'bayesopt' — Use Bayesian optimization. Internally, this setting calls bayesopt.

  • 'gridsearch' — Use grid search with NumGridDivisions values per dimension.

  • 'randomsearch' — Search at random among MaxObjectiveEvaluations points.

'gridsearch' searches in a random order, using uniform sampling without replacement from the grid. After optimization, you can get a table in grid order by using the command sortrows(Mdl.HyperparameterOptimizationResults).

'bayesopt'
AcquisitionFunctionName

  • 'expected-improvement-per-second-plus'

  • 'expected-improvement'

  • 'expected-improvement-plus'

  • 'expected-improvement-per-second'

  • 'lower-confidence-bound'

  • 'probability-of-improvement'

Acquisition functions whose names include per-second do not yield reproducible results because the optimization depends on the runtime of the objective function. Acquisition functions whose names include plus modify their behavior when they are overexploiting an area. For more details, see Acquisition Function Types.

'expected-improvement-per-second-plus'
MaxObjectiveEvaluations Maximum number of objective function evaluations. 30 for 'bayesopt' and 'randomsearch', and the entire grid for 'gridsearch'
MaxTime

Time limit, specified as a positive real scalar. The time limit is in seconds, as measured by tic and toc. The run time can exceed MaxTime because MaxTime does not interrupt function evaluations.

Inf
NumGridDivisions For 'gridsearch', the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. This field is ignored for categorical variables. 10
ShowPlots Logical value indicating whether to show plots. If true, this field plots the best observed objective function value against the iteration number. If you use Bayesian optimization (Optimizer is 'bayesopt'), then this field also plots the best estimated objective function value. The best observed objective function values and best estimated objective function values correspond to the values in the BestSoFar (observed) and BestSoFar (estim.) columns of the iterative display, respectively. You can find these values in the properties ObjectiveMinimumTrace and EstimatedObjectiveMinimumTrace of Mdl.HyperparameterOptimizationResults. If the problem includes one or two optimization parameters for Bayesian optimization, then ShowPlots also plots a model of the objective function against the parameters. true
SaveIntermediateResults Logical value indicating whether to save results when Optimizer is 'bayesopt'. If true, this field overwrites a workspace variable named 'BayesoptResults' at each iteration. The variable is a BayesianOptimization object. false
Verbose

Display at the command line:

  • 0— No iterative display

  • 1— Iterative display

  • 2— Iterative display with extra information

For details, see the bayesopt Verbose name-value argument and the example Optimize Classifier Fit Using Bayesian Optimization.

1
UseParallel Logical value indicating whether to run Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. For details, see Parallel Bayesian Optimization. false
Repartition

Logical value indicating whether to repartition the cross-validation at every iteration. If this field is false, the optimizer uses a single partition for the optimization.

The setting true usually gives the most robust results because it takes partitioning noise into account. However, for good results, true requires at least twice as many function evaluations.

false
Use no more than one of the following three options.
CVPartition A cvpartition object, as created by cvpartition 'Kfold',5 if you do not specify a cross-validation field
Holdout A scalar in the range (0,1) representing the holdout fraction
Kfold An integer greater than 1

Example: 'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',60)

Data Types: struct

Output Arguments


Trained kernel regression model, returned as a RegressionKernel model object or RegressionPartitionedKernel cross-validated model object.

If you set any of the name-value pair arguments CrossVal, CVPartition, Holdout, KFold, or Leaveout, then Mdl is a RegressionPartitionedKernel cross-validated model. Otherwise, Mdl is a RegressionKernel model.

To reference properties of Mdl, use dot notation. For example, enter Mdl.NumExpansionDimensions in the Command Window to display the number of dimensions of the expanded space.

Note

Unlike other regression models, and for economical memory usage, a RegressionKernel model object does not store the training data or training process details (for example, convergence history).

Optimization details, returned as a structure array including fields described in this table. The fields contain final values or name-value pair argument specifications.

Field Description
Solver

Objective function minimization technique: 'LBFGS-fast', 'LBFGS-blockwise', or 'LBFGS-tall'. For details, see Algorithms.

LossFunction Loss function. Either mean squared error (MSE) or epsilon-insensitive, depending on the type of linear regression model. See Learner.
Lambda Regularization term strength. See Lambda.
BetaTolerance Relative tolerance on the linear coefficients and the bias term. See BetaTolerance.
GradientTolerance Absolute gradient tolerance. See GradientTolerance.
ObjectiveValue Value of the objective function when optimization terminates. The regression loss plus the regularization term compose the objective function.
GradientMagnitude Infinite norm of the gradient vector of the objective function when optimization terminates. See GradientTolerance.
RelativeChangeInBeta Relative changes in the linear coefficients and the bias term when optimization terminates. See BetaTolerance.
FitTime Elapsed wall-clock time (in seconds) required to fit the model to the data.
History History of optimization information. This field also includes the optimization information from training Mdl. This field is empty ([]) if you specify 'Verbose',0. For details, see Verbose and Algorithms.

To access fields, use dot notation. For example, to access the vector of objective function values for each iteration, enter FitInfo.ObjectiveValue in the Command Window.

Examine the information provided by FitInfo to assess whether convergence is satisfactory.

Cross-validation optimization of hyperparameters, returned as a BayesianOptimization object or a table of hyperparameters and associated values. The output is nonempty when the value of 'OptimizeHyperparameters' is not 'none'. The output value depends on the Optimizer field value of the 'HyperparameterOptimizationOptions' name-value pair argument:

Value of Optimizer Field Value of HyperparameterOptimizationResults
'bayesopt' (default) Object of class BayesianOptimization
'gridsearch' or 'randomsearch' Table of hyperparameters used, observed objective function values (cross-validation loss), and rank of observations from lowest (best) to highest (worst)

Limitations

  • fitrkernel does not accept initial conditions for the linear coefficients beta (β) and bias term (b) used to determine the decision function, f(x) = T(x)β + b.

  • fitrkernel does not support standardization.

More About


Random Feature Expansion

Random feature expansion, such as Random Kitchen Sinks [1] and Fastfood [2], is a scheme to approximate Gaussian kernels of the kernel regression algorithm for big data in a computationally efficient way. Random feature expansion is more practical for big data applications that have large training sets, but can also be applied to smaller data sets that fit in memory.

The kernel regression algorithm searches for an optimal function that deviates from each response data point (yi) by values no greater than the epsilon margin (ε) after mapping the predictor data into a high-dimensional space.

Some regression problems cannot be described adequately using a linear model. In such cases, obtain a nonlinear regression model by replacing the dot product x1x2′ with a nonlinear kernel function G(x1,x2) = ⟨φ(x1),φ(x2)⟩, where xi is the ith observation (row vector) and φ(xi) is a transformation that maps xi to a high-dimensional space (called the "kernel trick"). However, evaluating G(x1,x2), the Gram matrix, for each pair of observations is computationally expensive for a large data set (large n).

The random feature expansion scheme finds a random transformation so that its dot product approximates the Gaussian kernel. That is,

G(x1,x2) = ⟨φ(x1),φ(x2)⟩ ≈ T(x1)T(x2)′,

where T(x) maps x in ℝᵖ to a high-dimensional space (ℝᵐ). The Random Kitchen Sinks [1] scheme uses the random transformation

T(x) = m^(-1/2)·exp(iZx′)′,

where Z ∈ ℝ^(m×p) is a sample drawn from N(0,σ^(-2)) and σ² is a kernel scale. This scheme requires O(mp) computation and storage. The Fastfood [2] scheme introduces another random basis V instead of Z using Hadamard matrices combined with Gaussian scaling matrices. This random basis reduces computation cost to O(m log p) and reduces storage to O(m).

You can specify values for m and σ², using the NumExpansionDimensions and KernelScale name-value pair arguments of fitrkernel, respectively.

The fitrkernel function uses the Fastfood scheme for random feature expansion and uses linear regression to train a Gaussian kernel regression model. Unlike solvers in the fitrsvm function, which require computation of the n-by-n Gram matrix, the solver in fitrkernel only needs to form a matrix of size n-by-m, with m typically much less than n for big data.
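
The following self-contained sketch illustrates the random-feature idea using Random Kitchen Sinks cosine features; fitrkernel itself uses the Fastfood scheme, so this is only an illustration of the approximation, with illustrative sizes m and p and kernel scale sigma:

rng('default') % For reproducibility
p = 2; m = 2000; sigma = 1;
x1 = randn(1,p); x2 = randn(1,p);
Z = randn(m,p)/sigma;              % frequencies drawn from N(0,sigma^(-2))
b = 2*pi*rand(m,1);                % random phases
T = @(x) sqrt(2/m)*cos(Z*x' + b)'; % 1-by-m random feature map
approxG = T(x1)*T(x2)'             % random-feature approximation
exactG = exp(-norm(x1-x2)^2/(2*sigma^2)) % Gaussian kernel value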

Box Constraint

A box constraint is a parameter that controls the maximum penalty imposed on observations that lie outside the epsilon margin (ε), and helps to prevent overfitting (regularization). Increasing the box constraint can lead to longer training times.

The box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations.
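
As code, the conversion in both directions (n is the number of observations; the values are illustrative):

n = 392;
lambda = 1/(100*n); % 'Lambda' value equivalent to 'BoxConstraint',100
C = 1/(lambda*n)    % recovers the box constraint, C = 100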

Algorithms

fitrkernel minimizes the regularized objective function using a Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) solver with ridge (L2) regularization. To find the type of LBFGS solver used for training, type FitInfo.Solver in the Command Window.

  • 'LBFGS-fast' — LBFGS solver.

  • 'LBFGS-blockwise' — LBFGS solver with a block-wise strategy. If fitrkernel requires more memory than the value of BlockSize to hold the transformed predictor data, then it uses a block-wise strategy.

  • 'LBFGS-tall' — LBFGS solver with a block-wise strategy for tall arrays.

When fitrkernel uses a block-wise strategy, fitrkernel implements LBFGS by distributing the calculation of the loss and gradient among different parts of the data at each iteration. Also, fitrkernel refines the initial estimates of the linear coefficients and the bias term by fitting the model locally to parts of the data and combining the coefficients by averaging. If you specify 'Verbose',1, then fitrkernel displays diagnostic information for each data pass and stores the information in the History field of FitInfo.

When fitrkernel does not use a block-wise strategy, the initial estimates are zeros. If you specify 'Verbose',1, then fitrkernel displays diagnostic information for each iteration and stores the information in the History field of FitInfo.

References

[1] Rahimi, A., and B. Recht. “Random Features for Large-Scale Kernel Machines.” Advances in Neural Information Processing Systems. Vol. 20, 2008, pp. 1177–1184.

[2] Le, Q., T. Sarlós, and A. Smola. “Fastfood — Approximating Kernel Expansions in Loglinear Time.” Proceedings of the 30th International Conference on Machine Learning. Vol. 28, No. 3, 2013, pp. 244–252.

[3] Huang, P. S., H. Avron, T. N. Sainath, V. Sindhwani, and B. Ramabhadran. “Kernel Methods Match Deep Neural Networks on TIMIT.” 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. 2014, pp. 205–209.


Introduced in R2018a