
batchNormalizationLayer

Batch normalization layer

Description

A batch normalization layer normalizes a mini-batch of data across all observations for each channel independently. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.

Creation

Description

layer = batchNormalizationLayer creates a batch normalization layer.


layer = batchNormalizationLayer(Name,Value) creates a batch normalization layer and sets the optional TrainedMean, TrainedVariance, Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pairs. For example, batchNormalizationLayer('Name','batchnorm') creates a batch normalization layer with the name 'batchnorm'.
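For instance, the following sketch (the property values here are chosen purely for illustration) creates a named layer with a larger stability constant:

layer = batchNormalizationLayer('Name','batchnorm','Epsilon',1e-4);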

Properties


Batch Normalization

Mean statistic used for prediction, specified as one of the following:

  • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

  • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

  • For feature or sequence input, a numeric array of size NumChannels-by-1

If the 'BatchNormalizationStatistics' training option is 'moving', then the software approximates the batch normalization statistics during training using a running estimate and, after training, sets the TrainedMean and TrainedVariance properties to the latest values of the moving estimates of the mean and variance, respectively.

If the 'BatchNormalizationStatistics' training option is 'population', then after network training finishes, the software passes through the data once more and sets the TrainedMean and TrainedVariance properties to the mean and variance computed from the entire training data set, respectively.

The layer uses TrainedMean and TrainedVariance to normalize the input during prediction.

Variance statistic used for prediction, specified as one of the following:

  • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

  • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

  • For feature or sequence input, a numeric array of size NumChannels-by-1

If the 'BatchNormalizationStatistics' training option is 'moving', then the software approximates the batch normalization statistics during training using a running estimate and, after training, sets the TrainedMean and TrainedVariance properties to the latest values of the moving estimates of the mean and variance, respectively.

If the 'BatchNormalizationStatistics' training option is 'population', then after network training finishes, the software passes through the data once more and sets the TrainedMean and TrainedVariance properties to the mean and variance computed from the entire training data set, respectively.

The layer uses TrainedMean and TrainedVariance to normalize the input during prediction.
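As a hypothetical sketch, assuming net is a network returned by trainNetwork and that its third layer is the batch normalization layer, you can inspect the stored statistics like this:

bn = net.Layers(3);          % assumed position of the batch normalization layer
mu = bn.TrainedMean;         % for 2-D image input: 1-by-1-by-NumChannels
sigma2 = bn.TrainedVariance; % same size as TrainedMean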

Constant to add to the mini-batch variances, specified as a numeric scalar equal to or larger than 1e-5.

The layer adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.

Number of input channels, specified as 'auto' or a positive integer.

This property is always equal to the number of channels of the input to the layer. If NumChannels is 'auto', then the software automatically determines the correct value for the number of channels at training time.

Parameters and Initialization

Function to initialize the channel scale factors, specified as one of the following:

  • 'ones' – Initialize the channel scale factors with ones.

  • 'zeros' – Initialize the channel scale factors with zeros.

  • 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

  • Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel scale factors when the Scale property is empty.

Data Types: char | string | function_handle

Function to initialize the channel offsets, specified as one of the following:

  • 'zeros' – Initialize the channel offsets with zeros.

  • 'ones' – Initialize the channel offsets with ones.

  • 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

  • Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel offsets when the Offset property is empty.

Data Types: char | string | function_handle
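As a minimal sketch, both initializers accept an anonymous function that receives the size sz and returns an array of that size (the constant 0.5 is arbitrary):

layer = batchNormalizationLayer( ...
    'ScaleInitializer',@(sz) 0.5*ones(sz), ...
    'OffsetInitializer',@(sz) zeros(sz));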

Channel scale factors γ, specified as a numeric array.

The channel scale factors are learnable parameters. When you train a network, if Scale is nonempty, then trainNetwork uses the Scale property as the initial value. If Scale is empty, then trainNetwork uses the initializer specified by ScaleInitializer.

At training time, Scale is one of the following:

  • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

  • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

  • For feature or sequence input, a numeric array of size NumChannels-by-1

Channel offsets β, specified as a numeric array.

The channel offsets are learnable parameters. When you train a network, if Offset is nonempty, then trainNetwork uses the Offset property as the initial value. If Offset is empty, then trainNetwork uses the initializer specified by OffsetInitializer.

At training time, Offset is one of the following:

  • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

  • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

  • For feature or sequence input, a numeric array of size NumChannels-by-1
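For example, a sketch that supplies initial values for Scale and Offset directly, for 2-D image input with an assumed 16 channels (the values are illustrative):

numChannels = 16;  % assumed channel count
layer = batchNormalizationLayer( ...
    'Scale',ones(1,1,numChannels), ...
    'Offset',zeros(1,1,numChannels));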

Decay value for the moving mean computation, specified as a numeric scalar between 0 and 1.

When the 'BatchNormalizationStatistics' training option is 'moving', at each iteration, the layer updates the moving mean value using

$$\mu^{*} = \lambda_{\mu}\hat{\mu} + (1 - \lambda_{\mu})\mu,$$

where $\mu^{*}$ denotes the updated mean, $\lambda_{\mu}$ denotes the mean decay value, $\hat{\mu}$ denotes the mean of the layer input, and $\mu$ denotes the latest value of the moving mean.

If the 'BatchNormalizationStatistics' training option is 'population', then this option has no effect.

Data Types: single | double

Decay value for the moving variance computation, specified as a numeric scalar between 0 and 1.

When the 'BatchNormalizationStatistics' training option is 'moving', at each iteration, the layer updates the moving variance value using

$$\sigma^{2*} = \lambda_{\sigma^{2}}\hat{\sigma}^{2} + (1 - \lambda_{\sigma^{2}})\sigma^{2},$$

where $\sigma^{2*}$ denotes the updated variance, $\lambda_{\sigma^{2}}$ denotes the variance decay value, $\hat{\sigma}^{2}$ denotes the variance of the layer input, and $\sigma^{2}$ denotes the latest value of the moving variance.

If the 'BatchNormalizationStatistics' training option is 'population', then this option has no effect.

Data Types: single | double
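As a sketch (the decay values are chosen arbitrarily), the decay properties only take effect when training uses moving statistics:

layer = batchNormalizationLayer('MeanDecay',0.05,'VarianceDecay',0.05);
options = trainingOptions('sgdm','BatchNormalizationStatistics','moving');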

Learning Rate and Regularization

Learning rate factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Learning rate factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

L2 regularization factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

L2 regularization factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
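For example, a sketch that doubles the learning rate for both learnable parameters while setting the L2 factors explicitly (the values are illustrative):

layer = batchNormalizationLayer( ...
    'ScaleLearnRateFactor',2,'OffsetLearnRateFactor',2, ...
    'ScaleL2Factor',1,'OffsetL2Factor',1);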

Layer

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with Name set to ''.

Data Types: char | string

This property is read-only.

Number of inputs of the layer. This layer accepts a single input only.

Data Types:double

This property is read-only.

Input names of the layer. This layer accepts a single input only.

Data Types:cell

This property is read-only.

Number of outputs of the layer. This layer has a single output only.

Data Types:double

This property is read-only.

Output names of the layer. This layer has a single output only.

Data Types:cell

Examples


Create a batch normalization layer with the name 'BN1'.

layer = batchNormalizationLayer('Name','BN1')
layer = 
  BatchNormalizationLayer with properties:

               Name: 'BN1'
        NumChannels: 'auto'
        TrainedMean: []
    TrainedVariance: []

   Hyperparameters
          MeanDecay: 0.1000
      VarianceDecay: 0.1000
            Epsilon: 1.0000e-05

   Learnable Parameters
             Offset: []
              Scale: []

Include batch normalization layers in a Layer array.

layers = [
    imageInputLayer([32 32 3])
    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding',1)
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer
    ]
layers = 
  11x1 Layer array with layers:

     1   ''   Image Input             32x32x3 images with 'zerocenter' normalization
     2   ''   Convolution             16 3x3 convolutions with stride [1 1] and padding [1 1 1 1]
     3   ''   Batch Normalization     Batch normalization
     4   ''   ReLU                    ReLU
     5   ''   Max Pooling             2x2 max pooling with stride [2 2] and padding [0 0 0 0]
     6   ''   Convolution             32 3x3 convolutions with stride [1 1] and padding [1 1 1 1]
     7   ''   Batch Normalization     Batch normalization
     8   ''   ReLU                    ReLU
     9   ''   Fully Connected         10 fully connected layer
    10   ''   Softmax                 softmax
    11   ''   Classification Output   crossentropyex

More About


Algorithms

The batch normalization operation normalizes the elements $x_i$ of the input by first calculating the mean $\mu_B$ and variance $\sigma_B^2$ over the spatial, time, and observation dimensions for each channel independently. Then, it calculates the normalized activations as

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}},$$

where $\epsilon$ is a constant that improves numerical stability when the variance is very small.

To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow batch normalization, the batch normalization operation further shifts and scales the activations using the transformation

$$y_i = \gamma \hat{x}_i + \beta,$$

where the offset $\beta$ and scale factor $\gamma$ are learnable parameters that are updated during network training.
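The following sketch is for illustration only (it is not how the layer is implemented) and applies the two equations above to a random 2-D image mini-batch with assumed sizes:

X = rand(28,28,16,32);                     % H-by-W-by-C-by-N mini-batch (assumed sizes)
epsilon = 1e-5;
mu = mean(X,[1 2 4]);                      % per-channel mean over spatial and observation dims
sigma2 = var(X,1,[1 2 4]);                 % per-channel variance
Xhat = (X - mu)./sqrt(sigma2 + epsilon);   % normalized activations
gamma = ones(1,1,16); beta = zeros(1,1,16);% learnable scale and offset (illustrative values)
Y = gamma.*Xhat + beta;                    % scaled and shifted output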

To make predictions with the network after training, batch normalization requires a fixed mean and variance to normalize the data. This fixed mean and variance can be calculated from the training data after training, or approximated during training using running statistic computations.

If the 'BatchNormalizationStatistics' training option is 'moving', then the software approximates the batch normalization statistics during training using a running estimate and, after training, sets the TrainedMean and TrainedVariance properties to the latest values of the moving estimates of the mean and variance, respectively.

If the 'BatchNormalizationStatistics' training option is 'population', then after network training finishes, the software passes through the data once more and sets the TrainedMean and TrainedVariance properties to the mean and variance computed from the entire training data set, respectively.

The layer uses TrainedMean and TrainedVariance to normalize the input during prediction.
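For example, a sketch that requests population statistics when training (the solver choice is illustrative):

options = trainingOptions('adam','BatchNormalizationStatistics','population');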

References

[1] Ioffe, Sergey, and Christian Szegedy. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” Preprint, submitted March 2, 2015. https://arxiv.org/abs/1502.03167.

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Introduced in R2017b