
predictorImportance

Estimates of predictor importance for regression tree

Syntax

imp = predictorImportance(tree)

Description

imp = predictorImportance(tree) computes estimates of predictor importance for tree by summing changes in the mean squared error (MSE) due to splits on every predictor and dividing the sum by the number of branch nodes.
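The computation above can be reproduced, approximately, from the node properties of the tree object. The following is a hedged sketch, not the library's actual implementation; it assumes a tree grown without surrogate splits and uses the NodeRisk, IsBranchNode, Children, CutPredictor, and PredictorNames properties (verify against your release's object reference):

```matlab
% Sketch: recompute predictor importance from node properties.
% Assumes no surrogate splits; NodeRisk is the weighted node MSE.
load carsmall
X = [Acceleration Cylinders Displacement Horsepower Model_Year Weight];
tree = fitrtree(X,MPG);

p = numel(tree.PredictorNames);
imp = zeros(1,p);
branch = find(tree.IsBranchNode);
for t = branch'                          % loop over branch nodes
    kids = tree.Children(t,:);           % left and right child indices
    dRisk = tree.NodeRisk(t) - sum(tree.NodeRisk(kids));   % MSE reduction
    j = strcmp(tree.PredictorNames, tree.CutPredictor{t}); % split variable
    imp(j) = imp(j) + dRisk;             % accumulate per predictor
end
imp = imp / numel(branch);               % divide by number of branch nodes
```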

Input Arguments

tree

A regression tree created by fitrtree, or by the compact method.

Output Arguments

imp

A row vector with the same number of elements as the number of predictors (columns) in tree.X. The entries are the estimates of predictor importance, with 0 representing the smallest possible importance.

Examples


Estimate the predictor importance for all predictor variables in the data.

Load the carsmall data set.

load carsmall

Grow a regression tree for MPG using Acceleration, Cylinders, Displacement, Horsepower, Model_Year, and Weight as predictors.

X = [Acceleration Cylinders Displacement Horsepower Model_Year Weight];
tree = fitrtree(X,MPG);

Estimate the predictor importance for all predictor variables.

imp = predictorImportance(tree)
imp = 1×6

    0.0647    0.1068    0.1155    0.1411    0.3348    2.6565

Weight, the last predictor, has the most impact on mileage. The predictor with the least impact on the predictions is the first variable, Acceleration.
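You can also recover the most and least influential predictors programmatically from imp. This sketch continues from the example above; because X is a numeric matrix, fitrtree assigns the default predictor names x1 through x6:

```matlab
% Sketch: identify the extremes of the importance vector.
[~,mostIdx]  = max(imp);   % index of the most important predictor
[~,leastIdx] = min(imp);   % index of the least important predictor
mostName  = tree.PredictorNames{mostIdx}    % 'x6', i.e., Weight
leastName = tree.PredictorNames{leastIdx}   % 'x1', i.e., Acceleration
```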

Estimate the predictor importance for all predictor variables when the regression tree contains surrogate splits.

Load the carsmall data set.

load carsmall

Grow a regression tree for MPG using Acceleration, Cylinders, Displacement, Horsepower, Model_Year, and Weight as predictors. Specify to identify surrogate splits.

X = [Acceleration Cylinders Displacement Horsepower Model_Year Weight];
tree = fitrtree(X,MPG,'Surrogate','on');

Estimate the predictor importance for all predictor variables.

imp = predictorImportance(tree)
imp = 1×6

    1.0449    2.4560    2.5570    2.5788    2.0832    2.8938

Comparing imp to the results of the previous example, Weight still has the most impact on mileage, but Cylinders is now the fourth most important predictor.
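To compare the two estimates side by side, you can plot both importance vectors on a grouped bar chart. A sketch, regrowing the two trees from the examples above (xticklabels and xtickangle require R2016b or later):

```matlab
% Sketch: compare importance estimates with and without surrogate splits.
load carsmall
X = [Acceleration Cylinders Displacement Horsepower Model_Year Weight];
impOff = predictorImportance(fitrtree(X,MPG));                   % no surrogates
impOn  = predictorImportance(fitrtree(X,MPG,'Surrogate','on'));  % surrogates
figure;
bar([impOff; impOn]');        % one group of two bars per predictor
legend('Surrogate off','Surrogate on','Location','northwest');
xticklabels({'Acceleration','Cylinders','Displacement', ...
    'Horsepower','Model_Year','Weight'});
xtickangle(45);
ylabel('Importance estimate');
```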

Load the carsmall data set. Consider a model that predicts the mean fuel economy of a car given its acceleration, number of cylinders, engine displacement, horsepower, manufacturer, model year, and weight. Consider Cylinders, Mfg, and Model_Year as categorical variables.

load carsmall
Cylinders = categorical(Cylinders);
Mfg = categorical(cellstr(Mfg));
Model_Year = categorical(Model_Year);
X = table(Acceleration,Cylinders,Displacement,Horsepower,Mfg,...
    Model_Year,Weight,MPG);

Display the number of categories represented in the categorical variables.

numCylinders = numel(categories(Cylinders))
numCylinders = 3
numMfg = numel(categories(Mfg))
numMfg = 28
numModelYear = numel(categories(Model_Year))
numModelYear = 3

Because Cylinders and Model_Year each contain only 3 categories, the standard CART predictor-splitting algorithm prefers splitting on a continuous predictor over these two variables.

Train a regression tree using the entire data set. To grow unbiased trees, specify use of the curvature test for splitting predictors. Because the data contain missing values, specify use of surrogate splits.

Mdl = fitrtree(X,'MPG','PredictorSelection','curvature','Surrogate','on');

Estimate predictor importance values by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes. Compare the estimates using a bar graph.

imp = predictorImportance(Mdl);
figure;
bar(imp);
title('Predictor Importance Estimates');
ylabel('Estimates');
xlabel('Predictors');
h = gca;
h.XTickLabel = Mdl.PredictorNames;
h.XTickLabelRotation = 45;
h.TickLabelInterpreter = 'none';

Figure contains an axes object. The axes object with title Predictor Importance Estimates contains an object of type bar.

In this case, Displacement is the most important predictor, followed by Horsepower.
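A sorted table makes the ranking explicit. This sketch continues from Mdl and imp above:

```matlab
% Sketch: list predictors from most to least important.
[impSorted,idx] = sort(imp,'descend');
ranking = table(Mdl.PredictorNames(idx)', impSorted', ...
    'VariableNames',{'Predictor','Importance'})
```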
