
fit

Train naive Bayes classification model for incremental learning

Since R2021a

Description

The fit function fits a configured naive Bayes classification model for incremental learning (incrementalClassificationNaiveBayes object) to streaming data. To additionally track performance metrics using the data as it arrives, use updateMetricsAndFit instead.

To fit or cross-validate a naive Bayes classification model to an entire batch of data at once, see fitcnb.


Mdl = fit(Mdl,X,Y) returns a naive Bayes classification model for incremental learning Mdl, which represents the input naive Bayes classification model for incremental learning Mdl trained using the predictor and response data, X and Y respectively. Specifically, fit updates the conditional posterior distribution of the predictor variables given the data.


Mdl = fit(Mdl,X,Y,'Weights',Weights) also sets observation weights Weights.
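For instance, a minimal sketch of both call forms, assuming Xchunk, Ychunk, and Wchunk (hypothetical variable names) hold one incoming chunk of predictors, labels, and observation weights:

Mdl = incrementalClassificationNaiveBayes('MaxNumClasses',5);
Mdl = fit(Mdl,Xchunk,Ychunk);                    % unweighted update
Mdl = fit(Mdl,Xchunk,Ychunk,'Weights',Wchunk);   % weighted update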

Examples


Fit an incremental naive Bayes learner when you know only the expected maximum number of classes in the data.

Create an incremental naive Bayes model. Specify that the maximum number of expected classes is 5.

Mdl = incrementalClassificationNaiveBayes('MaxNumClasses',5)
Mdl =
  incrementalClassificationNaiveBayes

                    IsWarm: 0
                   Metrics: [1x2 table]
                ClassNames: [1x0 double]
            ScoreTransform: 'none'
         DistributionNames: 'normal'
    DistributionParameters: {}

Mdl is an incrementalClassificationNaiveBayes model. All its properties are read-only. Mdl can process at most 5 unique classes. By default, the prior class distribution Mdl.Prior is empirical, which means the software updates the prior distribution as it encounters labels.

Mdl must be fit to data before you can use it to perform any other operations.

Load the human activity data set. Randomly shuffle the data.

load humanactivity
n = numel(actid);
rng(1) % For reproducibility
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);

For details on the data set, enter Description at the command line.

Fit the incremental model to the training data, in chunks of 50 observations at a time, by using the fit function. At each iteration:

  • Simulate a data stream by processing 50 observations.

  • Overwrite the previous incremental model with a new one fitted to the incoming observations.

  • Store the mean of the first predictor in the first class μ11 and the prior probability that the subject is moving (Y > 2) to see how these parameters evolve during incremental learning.

% Preallocation
numObsPerChunk = 50;
nchunk = floor(n/numObsPerChunk);
mu11 = zeros(nchunk,1);
priormoved = zeros(nchunk,1);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1);
    iend = min(n,numObsPerChunk*j);
    idx = ibegin:iend;
    Mdl = fit(Mdl,X(idx,:),Y(idx));
    mu11(j) = Mdl.DistributionParameters{1,1}(1);
    priormoved(j) = sum(Mdl.Prior(Mdl.ClassNames > 2));
end

Mdl is an incrementalClassificationNaiveBayes model object trained on all the data in the stream.

To see how the parameters evolve during incremental learning, plot them on separate tiles.

t = tiledlayout(2,1);
nexttile
plot(mu11)
ylabel('\mu_{11}')
xlabel('Iteration')
axis tight
nexttile
plot(priormoved)
ylabel('\pi(Subject Is Moving)')
xlabel(t,'Iteration')
axis tight

Figure contains 2 axes objects. Axes object 1 with xlabel Iteration, ylabel \mu_{11} contains an object of type line. Axes object 2 with ylabel \pi(Subject Is Moving) contains an object of type line.

fit updates the posterior mean of the predictor distribution as it processes each chunk. Because the prior class distribution is empirical, π(subject is moving) changes as fit processes each chunk.

Fit an incremental naive Bayes learner when you know all the class names in the data.

Consider training a device to predict whether a subject is sitting, standing, walking, running, or dancing based on biometric data measured on the subject. The class names map 1 through 5 to an activity. Also, suppose that the researchers plan to expose the device to each class uniformly.

Create an incremental naive Bayes model for multiclass learning. Specify the class names and the uniform prior class distribution.

classnames = 1:5;
Mdl = incrementalClassificationNaiveBayes('ClassNames',classnames,'Prior','uniform')
Mdl =
  incrementalClassificationNaiveBayes

                    IsWarm: 0
                   Metrics: [1x2 table]
                ClassNames: [1 2 3 4 5]
            ScoreTransform: 'none'
         DistributionNames: 'normal'
    DistributionParameters: {5x0 cell}

Mdl is an incrementalClassificationNaiveBayes model object. All its properties are read-only. During training, observed labels must be in Mdl.ClassNames.

Mdl must be fit to data before you can use it to perform any other operations.

Load the human activity data set. Randomly shuffle the data.

load humanactivity
n = numel(actid);
rng(1); % For reproducibility
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);

For details on the data set, enter Description at the command line.

Fit the incremental model to the training data by using the fit function. Simulate a data stream by processing chunks of 50 observations at a time. At each iteration:

  • Process 50 observations.

  • Overwrite the previous incremental model with a new one fitted to the incoming observations.

  • Store the mean of the first predictor in the first class μ11 and the prior probability that the subject is moving (Y > 2) to see how these parameters evolve during incremental learning.

% Preallocation
numObsPerChunk = 50;
nchunk = floor(n/numObsPerChunk);
mu11 = zeros(nchunk,1);
priormoved = zeros(nchunk,1);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1);
    iend = min(n,numObsPerChunk*j);
    idx = ibegin:iend;
    Mdl = fit(Mdl,X(idx,:),Y(idx));
    mu11(j) = Mdl.DistributionParameters{1,1}(1);
    priormoved(j) = sum(Mdl.Prior(Mdl.ClassNames > 2));
end

Mdl is an incrementalClassificationNaiveBayes model object trained on all the data in the stream.

To see how the parameters evolve during incremental learning, plot them on separate tiles.

t = tiledlayout(2,1);
nexttile
plot(mu11)
ylabel('\mu_{11}')
xlabel('Iteration')
axis tight
nexttile
plot(priormoved)
ylabel('\pi(Subject Is Moving)')
xlabel(t,'Iteration')
axis tight

Figure contains 2 axes objects. Axes object 1 with xlabel Iteration, ylabel \mu_{11} contains an object of type line. Axes object 2 with ylabel \pi(Subject Is Moving) contains an object of type line.

fit updates the posterior mean of the predictor distribution as it processes each chunk. Because the prior class distribution is specified as uniform, π(subject is moving) = 0.6 and does not change as fit processes each chunk.

Train a naive Bayes classification model by using fitcnb, convert it to an incremental learner, track its performance on streaming data, and then fit the model to the data. Specify observation weights.

Load and Preprocess Data

Load the human activity data set. Randomly shuffle the data.

load humanactivity
rng(1); % For reproducibility
n = numel(actid);
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);

For details on the data set, enter Description at the command line.

Suppose that the data from a stationary subject (Y <= 2) has double the quality of the data from a moving subject. Create a weight variable that assigns a weight of 2 to observations from a stationary subject and 1 to a moving subject.

W = ones(n,1) + (Y <= 2);

Train Naive Bayes Classification Model

Fit a naive Bayes classification model to a random sample of half the data.

idxtt = randsample([true false],n,true);
TTMdl = fitcnb(X(idxtt,:),Y(idxtt),'Weights',W(idxtt))
TTMdl =
  ClassificationNaiveBayes

              ResponseName: 'Y'
     CategoricalPredictors: []
                ClassNames: [1 2 3 4 5]
            ScoreTransform: 'none'
           NumObservations: 12053
         DistributionNames: {1x60 cell}
    DistributionParameters: {5x60 cell}

TTMdl is a ClassificationNaiveBayes model object representing a traditionally trained naive Bayes classification model.

Convert Trained Model

Convert the traditionally trained model to a naive Bayes classification model for incremental learning.

IncrementalMdl = incrementalLearner(TTMdl)
IncrementalMdl =
  incrementalClassificationNaiveBayes

                    IsWarm: 1
                   Metrics: [1x2 table]
                ClassNames: [1 2 3 4 5]
            ScoreTransform: 'none'
         DistributionNames: {1x60 cell}
    DistributionParameters: {5x60 cell}

IncrementalMdl is an incrementalClassificationNaiveBayes model. Because class names are specified in IncrementalMdl.ClassNames, labels encountered during incremental learning must be in IncrementalMdl.ClassNames.

Separately Track Performance Metrics and Fit Model

Perform incremental learning on the rest of the data by using the updateMetrics and fit functions. At each iteration:

  1. Simulate a data stream by processing 50 observations at a time.

  2. Call updateMetrics to update the cumulative and window minimal cost of the model given the incoming chunk of observations. Overwrite the previous incremental model to update the losses in the Metrics property. Note that the function does not fit the model to the chunk of data; the chunk is "new" data for the model. Specify the observation weights.

  3. Store the minimal cost.

  4. Call fit to fit the model to the incoming chunk of observations. Overwrite the previous incremental model to update the model parameters. Specify the observation weights.

% Preallocation
idxil = ~idxtt;
nil = sum(idxil);
numObsPerChunk = 50;
nchunk = floor(nil/numObsPerChunk);
mc = array2table(zeros(nchunk,2),'VariableNames',["Cumulative" "Window"]);
Xil = X(idxil,:);
Yil = Y(idxil);
Wil = W(idxil);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(nil,numObsPerChunk*(j-1) + 1);
    iend = min(nil,numObsPerChunk*j);
    idx = ibegin:iend;
    IncrementalMdl = updateMetrics(IncrementalMdl,Xil(idx,:),Yil(idx),...
        'Weights',Wil(idx));
    mc{j,:} = IncrementalMdl.Metrics{"MinimalCost",:};
    IncrementalMdl = fit(IncrementalMdl,Xil(idx,:),Yil(idx),'Weights',Wil(idx));
end

IncrementalMdl is an incrementalClassificationNaiveBayes model object trained on all the data in the stream.

Alternatively, you can use updateMetricsAndFit to update performance metrics of the model given a new chunk of data, and then fit the model to the data.
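For reference, a sketch of the equivalent single call inside the loop body above (same variables as in the preceding code):

IncrementalMdl = updateMetricsAndFit(IncrementalMdl,Xil(idx,:),Yil(idx),'Weights',Wil(idx));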

Plot a trace plot of the performance metrics.

h = plot(mc.Variables);
xlim([0 nchunk])
ylabel('Minimal Cost')
legend(h,mc.Properties.VariableNames)
xlabel('Iteration')

Figure contains an axes object. The axes object with xlabel Iteration, ylabel Minimal Cost contains 2 objects of type line. These objects represent Cumulative, Window.

The cumulative loss gradually stabilizes, whereas the window loss jumps throughout the training.

Incrementally train a naive Bayes classification model only when its performance degrades.

Load the human activity data set. Randomly shuffle the data.

load humanactivity
n = numel(actid);
rng(1) % For reproducibility
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);

For details on the data set, enter Description at the command line.

Configure a naive Bayes classification model for incremental learning so that the maximum number of expected classes is 5, the tracked performance metric includes the misclassification error rate, and the metrics window size is 1000. Fit the configured model to the first 1000 observations.

Mdl = incrementalClassificationNaiveBayes('MaxNumClasses',5,'MetricsWindowSize',1000,...
    'Metrics','classiferror');
initobs = 1000;
Mdl = fit(Mdl,X(1:initobs,:),Y(1:initobs));

Mdl is an incrementalClassificationNaiveBayes model object.

Perform incremental learning, with conditional fitting, by following this procedure for each iteration:

  • Simulate a data stream by processing a chunk of 100 observations at a time.

  • Update the model performance on the incoming chunk of data.

  • Fit the model to the chunk of data only when the misclassification error rate is greater than 0.05.

  • When tracking performance and fitting, overwrite the previous incremental model.

  • Store the misclassification error rate and the mean of the first predictor in the second class μ21 to see how they evolve during training.

  • Track when fit trains the model.

% Preallocation
numObsPerChunk = 100;
nchunk = floor((n - initobs)/numObsPerChunk);
mu21 = zeros(nchunk,1);
ce = array2table(nan(nchunk,2),'VariableNames',["Cumulative" "Window"]);
trained = false(nchunk,1);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1 + initobs);
    iend = min(n,numObsPerChunk*j + initobs);
    idx = ibegin:iend;
    Mdl = updateMetrics(Mdl,X(idx,:),Y(idx));
    ce{j,:} = Mdl.Metrics{"ClassificationError",:};
    if ce{j,2} > 0.05
        Mdl = fit(Mdl,X(idx,:),Y(idx));
        trained(j) = true;
    end
    mu21(j) = Mdl.DistributionParameters{2,1}(1);
end

Mdl is an incrementalClassificationNaiveBayes model object trained on all the data in the stream.

To see how the model performance and μ21 evolve during training, plot them on separate tiles.

t = tiledlayout(2,1);
nexttile
plot(mu21)
hold on
plot(find(trained),mu21(trained),'r.')
xlim([0 nchunk])
ylabel('\mu_{21}')
legend('\mu_{21}','Training occurs','Location','best')
hold off
nexttile
plot(ce.Variables)
xlim([0 nchunk])
ylabel('Misclassification Error Rate')
legend(ce.Properties.VariableNames,'Location','best')
xlabel(t,'Iteration')

Figure contains 2 axes objects. Axes object 1 with ylabel \mu_{21} contains 2 objects of type line. One or more of the lines displays its values using only markers. These objects represent \mu_{21}, Training occurs. Axes object 2 with ylabel Misclassification Error Rate contains 2 objects of type line. These objects represent Cumulative, Window.

The trace plot of μ21 shows periods of constant values, during which the loss within the previous observation window is at most 0.05.

Input Arguments


Naive Bayes classification model for incremental learning to fit to streaming data, specified as an incrementalClassificationNaiveBayes model object. You can create Mdl directly or by converting a supported, traditionally trained machine learning model using the incrementalLearner function. For more details, see the corresponding reference page.

Chunk of predictor data to which the model is fit, specified as an n-by-Mdl.NumPredictors floating-point matrix.

The length of the observation labels Y and the number of observations in X must be equal; Y(j) is the label of observation j (row) in X.

Note

If Mdl.NumPredictors = 0, fit infers the number of predictors from X, and sets the corresponding property of the output model. Otherwise, if the number of predictor variables in the streaming data changes from Mdl.NumPredictors, fit issues an error.

Data Types: single | double
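As a sketch of the inference behavior (random data for illustration only):

Mdl = incrementalClassificationNaiveBayes('MaxNumClasses',5);
Mdl.NumPredictors   % 0 before the first fit
Mdl = fit(Mdl,randn(50,4),randi(5,50,1));
Mdl.NumPredictors   % 4, inferred from the chunk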

Chunk of labels to which the model is fit, specified as a categorical, character, or string array, logical or floating-point vector, or cell array of character vectors.

The length of the observation labels Y and the number of observations in X must be equal; Y(j) is the label of observation j (row) in X.

fit issues an error when one or both of these conditions are met:

  • Y contains a new label and the maximum number of classes has already been reached (see the MaxNumClasses and ClassNames arguments of incrementalClassificationNaiveBayes).

  • The ClassNames property of the input model Mdl is nonempty, and the data types of Y and Mdl.ClassNames are different.

Data Types: char | string | cell | categorical | logical | single | double

Chunk of observation weights, specified as a floating-point vector of positive values. fit weighs the observations in X with the corresponding values in Weights. The size of Weights must equal n, the number of observations in X.

By default, Weights is ones(n,1).

For more details, including normalization schemes, see Observation Weights.

Data Types: double | single

Note

If an observation (predictor or label) or weight contains at least one missing (NaN) value, fit ignores the observation. Consequently, fit uses fewer than n observations to create an updated model, where n is the number of observations in X.
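A sketch of this behavior, using a hypothetical chunk with one incomplete observation:

X = [randn(10,2); NaN 4];   % last observation contains a NaN
Y = [repmat([1;2],5,1); 1];
Mdl = incrementalClassificationNaiveBayes('MaxNumClasses',2);
Mdl = fit(Mdl,X,Y);         % the NaN row is ignored; only 10 observations are used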

Output Arguments


Updated naive Bayes classification model for incremental learning, returned as an incremental learning model object of the same data type as the input model Mdl, an incrementalClassificationNaiveBayes object.

In addition to updating distribution model parameters, fit performs the following actions when Y contains expected, but unprocessed, classes:

  • If you do not specify all expected classes by using the ClassNames name-value argument when you create the input model Mdl using incrementalClassificationNaiveBayes, fit:

    1. Appends any new labels in Y to the tail of Mdl.ClassNames.

    2. Expands Mdl.Cost to a c-by-c matrix, where c is the number of classes in Mdl.ClassNames. The resulting misclassification cost matrix is balanced.

    3. Expands Mdl.Prior to a length c vector of an updated empirical class distribution.

  • If you specify all expected classes when you create the input model Mdl or convert a traditionally trained naive Bayes model using incrementalLearner, but you do not specify a misclassification cost matrix (Mdl.Cost), fit sets misclassification costs of processed classes to 1 and unprocessed classes to NaN. For example, if fit processes the first two classes of a possible three classes, Mdl.Cost is [0 1 NaN; 1 0 NaN; 1 1 0].
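For example, a sketch of the class-expansion behavior when you do not prespecify ClassNames (hypothetical two-chunk stream with random predictor data):

Mdl = incrementalClassificationNaiveBayes('MaxNumClasses',3);
Mdl = fit(Mdl,randn(20,2),ones(20,1));     % class 1 observed
Mdl.ClassNames                             % 1
Mdl = fit(Mdl,randn(20,2),2*ones(20,1));   % class 2 appears in the stream
Mdl.ClassNames                             % [1 2]; Cost and Prior expand accordingly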

More About


Bag-of-Tokens Model

In the bag-of-tokens model, the value of predictor j is the nonnegative number of occurrences of token j in the observation. The number of categories (bins) in the multinomial model is the number of distinct tokens (number of predictors).
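For instance, a sketch of the encoding for one observation, assuming a hypothetical three-token vocabulary:

vocab = ["a" "b" "c"];
doc = categorical(["a" "c" "a"],vocab);   % tokens observed in one observation
x = countcats(doc)                        % bag-of-tokens predictor vector: [2 0 1]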

Tips

  • Unlike traditional training, incremental learning might not have a separate test (holdout) set. Therefore, to treat each incoming chunk of data as a test set, pass the incremental model and each incoming chunk to updateMetrics before training the model on the same data.

Algorithms


Normal Distribution Estimators

If predictor variable j has a conditional normal distribution (see the DistributionNames property), the software fits the distribution to the data by computing the class-specific weighted mean and the biased (maximum likelihood) estimate of the weighted standard deviation. For each class k:

  • The weighted mean of predictor j is

    \bar{x}_{j|k} = \frac{\sum_{\{i:y_i=k\}} w_i x_{ij}}{\sum_{\{i:y_i=k\}} w_i},

    where w_i is the weight for observation i. The software normalizes weights within a class such that they sum to the prior probability for that class.

  • The biased estimator of the weighted standard deviation of predictor j is

    s_{j|k} = \left[ \frac{\sum_{\{i:y_i=k\}} w_i \left( x_{ij} - \bar{x}_{j|k} \right)^2}{\sum_{\{i:y_i=k\}} w_i} \right]^{1/2}.
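For example, a sketch of these estimators for a single predictor and a single class, assuming x and w hold the class-k predictor values and their (already normalized) weights:

xbar = sum(w.*x)/sum(w);                   % weighted mean
s = sqrt(sum(w.*(x - xbar).^2)/sum(w));    % biased (ML) weighted standard deviation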

Estimated Probability for Multinomial Distribution

If all predictor variables compose a conditional multinomial distribution (see the DistributionNames property), the software fits the distribution using the Bag-of-Tokens Model. The software stores the probability that token j appears in class k in the property DistributionParameters{k,j}. With additive smoothing [1], the estimated probability is

P(\text{token } j \mid \text{class } k) = \frac{1 + c_{j|k}}{P + c_k},

where:

  • c_{j|k} = n_k \frac{\sum_{\{i:y_i=k\}} x_{ij} w_i}{\sum_{\{i:y_i=k\}} w_i}, which is the weighted number of occurrences of token j in class k.

  • n_k is the number of observations in class k.

  • w_i is the weight for observation i. The software normalizes weights within a class so that they sum to the prior probability for that class.

  • c_k = \sum_{j=1}^{P} c_{j|k}, which is the total weighted number of occurrences of all tokens in class k.
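For example, a sketch of the smoothed estimate for one class, assuming Xk is a hypothetical nk-by-P token-count matrix and wk the weight vector for the observations in class k:

nk = size(Xk,1);
P = size(Xk,2);
cjk = nk * (wk' * Xk) / sum(wk);       % c_{j|k} for all P tokens, 1-by-P
prob = (1 + cjk) / (P + sum(cjk));     % additive smoothing; sum(cjk) is c_k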

Estimated Probability for Multivariate Multinomial Distribution

If predictor variable j has a conditional multivariate multinomial distribution (see the DistributionNames property), the software follows this procedure:

  1. The software collects a list of the unique levels, stores the sorted list in CategoricalLevels, and considers each level a bin. Each combination of predictor and class is a separate, independent multinomial random variable.

  2. For each class k, the software counts instances of each categorical level using the list stored in CategoricalLevels{j}.

  3. The software stores the probability that predictor j in class k has level L in the property DistributionParameters{k,j}, for all levels in CategoricalLevels{j}. With additive smoothing [1], the estimated probability is

    P(\text{predictor } j = L \mid \text{class } k) = \frac{1 + m_{j|k}(L)}{m_j + m_k},

    where:

    • m_{j|k}(L) = n_k \frac{\sum_{\{i:y_i=k\}} I\{x_{ij}=L\} w_i}{\sum_{\{i:y_i=k\}} w_i}, which is the weighted number of observations for which predictor j equals L in class k.

    • n_k is the number of observations in class k.

    • I\{x_{ij}=L\} = 1 if x_{ij} = L, and 0 otherwise.

    • w_i is the weight for observation i. The software normalizes weights within a class so that they sum to the prior probability for that class.

    • m_j is the number of distinct levels in predictor j.

    • m_k is the weighted number of observations in class k.
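For example, a sketch of the smoothed estimate for one level of one categorical predictor, assuming xk holds the class-k values of predictor j, wk the weights, L a level, and levels the list in CategoricalLevels{j}:

nk = numel(xk);
mjkL = nk * sum(wk .* (xk == L)) / sum(wk);   % weighted count of observations equal to L
mj = numel(levels);                           % number of distinct levels of predictor j
mk = nk;                                      % weighted observation count (assumed equal to nk here)
probL = (1 + mjkL) / (mj + mk);               % additive smoothing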

Observation Weights

For each conditional predictor distribution, fit computes the weighted average and standard deviation.

If the prior class probability distribution is known (in other words, the prior distribution is not empirical), fit normalizes observation weights to sum to the prior class probabilities in the respective classes. This action implies that the default observation weights are the respective prior class probabilities.

If the prior class probability distribution is empirical, the software normalizes the specified observation weights to sum to 1 each time you call fit.
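A sketch of the two normalization schemes, assuming weights w, labels y, class list classes, and a known prior vector prior (hypothetical variable names):

% Known prior: weights in each class sum to that class's prior probability
for k = 1:numel(classes)
    inK = (y == classes(k));
    w(inK) = prior(k)*w(inK)/sum(w(inK));
end

% Empirical prior: weights are normalized to sum to 1
w = w/sum(w);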

References

[1] Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information Retrieval. NY: Cambridge University Press, 2008.

Version History

Introduced in R2021a
