
lstmProjectedLayer

Long short-term memory (LSTM) projected layer for recurrent neural network (RNN)

Since R2022b

    Description

    An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time series and sequence data using projected learnable weights.

    To compress a deep learning network, you can use projected layers. A projected layer is a type of deep learning layer that enables compression by reducing the number of stored learnable parameters. The layer introduces learnable projector matrices Q, replaces multiplications of the form W x, where W is a learnable matrix, with the multiplication W Q Qᵀ x, and stores Q and W′ = W Q instead of storing W. Projecting x into a lower-dimensional space using Q typically requires less memory to store the learnable parameters and can have similarly strong prediction accuracy.

    Reducing the number of learnable parameters by projecting an LSTM layer rather than reducing the number of hidden units of the LSTM layer maintains the output size of the layer and, in turn, the sizes of the downstream layers, which can result in better prediction accuracy.
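
    The following is a minimal sketch of the dimensional idea behind the projection, using arbitrary illustrative sizes (a 400-by-12 weight matrix and a projector with 9 columns); it is not part of the layer API.

    W = randn(400,12);    % original learnable matrix (for example, 4*NumHiddenUnits-by-InputSize)
    Q = randn(12,9);      % learnable projector Q (stored by the layer)
    Wp = W*Q;             % W′ = W*Q, stored instead of W (400-by-9)
    x = randn(12,1);      % example input vector
    y = Wp*(Q'*x);        % multiplication of the form W Q Qᵀ x

    Here the layer stores 400*9 + 12*9 = 3708 values instead of the 400*12 = 4800 values of the original matrix.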

    Creation

    Description


    layer = lstmProjectedLayer(numHiddenUnits,outputProjectorSize,inputProjectorSize) creates an LSTM projected layer and sets the NumHiddenUnits, OutputProjectorSize, and InputProjectorSize properties.


    layer = lstmProjectedLayer(___,Name=Value) sets the OutputMode, HasStateInputs, HasStateOutputs, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments.

    Properties


    Projected LSTM

    This property is read-only.

    Number of hidden units (also known as the hidden size), specified as a positive integer.

    The number of hidden units corresponds to the amount of information that the layer remembers between time steps (the hidden state). The hidden state can contain information from all the previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data.

    The hidden state does not limit the number of time steps that the layer processes in an iteration. To split your sequences into smaller sequences when you use the trainNetwork function, use the SequenceLength training option.

    The layer outputs data with NumHiddenUnits channels.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    This property is read-only.

    Output projector size, specified as a positive integer.

    The LSTM layer operation uses four matrix multiplications of the form R h_{t−1}, where R denotes the recurrent weights and h_t denotes the hidden state (or, equivalently, the layer output) at time step t.

    The LSTM projected layer operation instead uses multiplications of the form R Q_o Q_oᵀ h_{t−1}, where Q_o is a NumHiddenUnits-by-OutputProjectorSize matrix known as the output projector. The layer uses the same projector Q_o for each of the four multiplications.

    To perform the four multiplications of the form R h_{t−1}, an LSTM layer stores four recurrent weight matrices R, which necessitates storing 4*NumHiddenUnits^2 learnable parameters. By instead storing the 4*NumHiddenUnits-by-OutputProjectorSize matrix R′ = R Q_o and the projector Q_o, an LSTM projected layer can perform the multiplication R′ Q_oᵀ h_{t−1} and store only 5*NumHiddenUnits*OutputProjectorSize learnable parameters.

    Tip

    To ensure that R Q_o Q_oᵀ h_{t−1} requires fewer learnable parameters, set the OutputProjectorSize property to a value less than (4/5)*NumHiddenUnits.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
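
    For example, this sketch compares the two storage costs described above, assuming 100 hidden units and an output projector size of 25 (illustrative values only).

    numHiddenUnits = 100;
    outputProjectorSize = 25;                                % less than (4/5)*numHiddenUnits
    paramsLSTM = 4*numHiddenUnits^2                          % 40000 recurrent weight parameters
    paramsProjected = 5*numHiddenUnits*outputProjectorSize   % 12500 parameters for R′ and Q_o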

    This property is read-only.

    Input projector size, specified as a positive integer.

    The LSTM layer operation uses four matrix multiplications of the form W x_t, where W denotes the input weights and x_t denotes the layer input at time step t.

    The LSTM projected layer operation instead uses multiplications of the form W Q_i Q_iᵀ x_t, where Q_i is an InputSize-by-InputProjectorSize matrix known as the input projector. The layer uses the same projector Q_i for each of the four multiplications.

    To perform the four multiplications of the form W x_t, an LSTM layer stores four weight matrices W, which necessitates storing 4*NumHiddenUnits*InputSize learnable parameters. By instead storing the 4*NumHiddenUnits-by-InputProjectorSize matrix W′ = W Q_i and the projector Q_i, an LSTM projected layer can perform the multiplication W′ Q_iᵀ x_t and store only (4*NumHiddenUnits + InputSize)*InputProjectorSize learnable parameters.

    Tip

    To ensure that W Q_i Q_iᵀ x_t requires fewer learnable parameters, set the InputProjectorSize property to a value less than (4*NumHiddenUnits*InputSize)/(4*NumHiddenUnits + InputSize).

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
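
    Similarly, this sketch compares the input-weight storage costs, assuming an input size of 12, 100 hidden units, and an input projector size of 9 (illustrative values only).

    numHiddenUnits = 100;
    inputSize = 12;
    inputProjectorSize = 9;                                               % less than (4*100*12)/(4*100+12), which is about 11.7
    paramsLSTM = 4*numHiddenUnits*inputSize                               % 4800 input weight parameters
    paramsProjected = (4*numHiddenUnits + inputSize)*inputProjectorSize   % 3708 parameters for W′ and Q_i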

    This property is read-only.

    Output mode, specified as one of these values:

    • 'sequence'— Output the complete sequence.

    • 'last'— Output the last time step of the sequence.

    This property is read-only.

    Flag for state inputs to the layer, specified as 0 (false) or 1 (true).

    If the HasStateInputs property is 0 (false), then the layer has one input with the name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

    If the HasStateInputs property is 1 (true), then the layer has three inputs with the names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

    This property is read-only.

    Flag for state outputs from the layer, specified as 0 (false) or 1 (true).

    If the HasStateOutputs property is 0 (false), then the layer has one output with the name 'out', which corresponds to the output data.

    If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.
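
    For example, this sketch creates a layer with both state flags enabled and inspects the resulting input and output names (the layer sizes are arbitrary and purely illustrative).

    layer = lstmProjectedLayer(100,30,16,HasStateInputs=true,HasStateOutputs=true);
    layer.InputNames    % {'in'}  {'hidden'}  {'cell'}
    layer.OutputNames   % {'out'}  {'hidden'}  {'cell'}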

    This property is read-only.

    Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.

    Data Types: double | char

    Activations

    This property is read-only.

    Activation function to update the cell and hidden state, specified as one of these values:

    • 'tanh'— Use the hyperbolic tangent function (tanh).

    • 'softsign'— Use the softsign function softsign(x) = x/(1 + |x|).

    The layer uses this option as the function σ_c in the calculations to update the cell and hidden state. For more information on how an LSTM layer uses activation functions, see Long Short-Term Memory Layer.

    This property is read-only.

    Activation function to apply to the gates, specified as one of these values:

    • 'sigmoid'— Use the sigmoid function σ(x) = (1 + e^(−x))^(−1).

    • 'hard-sigmoid'— Use the hard sigmoid function

      σ(x) = 0 if x < −2.5
      σ(x) = 0.2x + 0.5 if −2.5 ≤ x ≤ 2.5
      σ(x) = 1 if x > 2.5

    The layer uses this option as the function σ_g in the calculations for the layer gates.
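
    As an illustration only (not the layer implementation), the hard sigmoid can be evaluated in MATLAB as follows.

    hardSigmoid = @(x) min(max(0.2*x + 0.5,0),1);   % 0 for x < -2.5, linear in between, 1 for x > 2.5
    hardSigmoid([-3 0 3])                           % returns [0 0.5 1]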

    State

    Cell state to use in the layer operation, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial cell state when data is passed to the layer.

    After you set this property manually, calls to the resetState function set the cell state to this value.

    If HasStateInputs is 1 (true), then the CellState property must be empty.

    Data Types: single | double

    Hidden state to use in the layer operation, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.

    After you set this property manually, calls to the resetState function set the hidden state to this value.

    If HasStateInputs is 1 (true), then the HiddenState property must be empty.

    Data Types: single | double
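
    For example, this sketch manually sets the initial state values of a layer with 100 hidden units (illustrative values only).

    layer = lstmProjectedLayer(100,30,16);
    layer.HiddenState = zeros(100,1);   % used as the initial hidden state
    layer.CellState = zeros(100,1);     % used as the initial cell state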

    Parameters and Initialization

    Function to initialize the input weights, specified as one of these values:

    • 'glorot'— Initialize the input weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and a variance of 2/(InputProjectorSize + numOut), where numOut = 4*NumHiddenUnits.

    • 'he'— Initialize the input weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and a variance of 2/InputProjectorSize.

    • 'orthogonal'— Initialize the input weights with Q, the orthogonal matrix in the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

    • 'narrow-normal'— Initialize the input weights by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.

    • 'zeros'— Initialize the input weights with zeros.

    • 'ones'— Initialize the input weights with ones.

    • Function handle — Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.

    The layer only initializes the input weights when the InputWeights property is empty.

    Data Types: char | string | function_handle
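
    For example, this sketch specifies a custom input-weights initializer as a function handle; the scaling factor is arbitrary and purely illustrative.

    initFcn = @(sz) 0.02*randn(sz);     % sz is the size of the input weights, supplied by the layer
    layer = lstmProjectedLayer(100,30,16,InputWeightsInitializer=initFcn);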

    Function to initialize the recurrent weights, specified as one of the following:

    • 'orthogonal'— Initialize the recurrent weights with Q, the orthogonal matrix in the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

    • 'glorot'— Initialize the recurrent weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and a variance of 2/(numIn + numOut), where numIn = OutputProjectorSize and numOut = 4*NumHiddenUnits.

    • 'he'— Initialize the recurrent weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and a variance of 2/OutputProjectorSize.

    • 'narrow-normal'— Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.

    • 'zeros'— Initialize the recurrent weights with zeros.

    • 'ones'— Initialize the recurrent weights with ones.

    • Function handle — Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.

    The layer only initializes the recurrent weights when the RecurrentWeights property is empty.

    Data Types: char | string | function_handle

    Function to initialize the input projector, specified as one of the following:

    • 'orthogonal'— Initialize the input projector with Q, the orthogonal matrix in the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

    • 'glorot'— Initialize the input projector with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and a variance of 2/(InputSize + InputProjectorSize).

    • 'he'— Initialize the input projector with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and a variance of 2/InputSize.

    • 'narrow-normal'— Initialize the input projector by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.

    • 'zeros'— Initialize the input projector with zeros.

    • 'ones'— Initialize the input projector with ones.

    • Function handle — Initialize the input projector with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input projector.

    The layer only initializes the input projector when the InputProjector property is empty.

    Data Types: char | string | function_handle

    Function to initialize the output projector, specified as one of the following:

    • 'orthogonal'— Initialize the output projector with Q, the orthogonal matrix in the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

    • 'glorot'— Initialize the output projector with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and a variance of 2/(NumHiddenUnits + OutputProjectorSize).

    • 'he'— Initialize the output projector with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and a variance of 2/NumHiddenUnits.

    • 'narrow-normal'— Initialize the output projector by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.

    • 'zeros'— Initialize the output projector with zeros.

    • 'ones'— Initialize the output projector with ones.

    • Function handle — Initialize the output projector with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the output projector.

    The layer only initializes the output projector when the OutputProjector property is empty.

    Data Types: char | string | function_handle

    Function to initialize the bias, specified as one of these values:

    • 'unit-forget-gate'— Initialize the forget gate bias with ones and the remaining biases with zeros.

    • 'narrow-normal'— Initialize the bias by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.

    • 'ones'— Initialize the bias with ones.

    • Function handle — Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

    The layer only initializes the bias when the Bias property is empty.

    Data Types: char | string | function_handle

    Input weights, specified as a matrix.

    The input weight matrix is a concatenation of the four input weight matrices for the components (gates) in the LSTM layer. The layer vertically concatenates the four matrices in this order:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    The input weights are learnable parameters. When you train a neural network using the trainNetwork function, if InputWeights is nonempty, then the software uses the InputWeights property as the initial value. If InputWeights is empty, then the software uses the initializer specified by InputWeightsInitializer.

    At training time, InputWeights is a 4*NumHiddenUnits-by-InputProjectorSize matrix.

    Recurrent weights, specified as a matrix.

    The recurrent weight matrix is a concatenation of the four recurrent weight matrices for the components (gates) in the LSTM layer. The layer vertically concatenates the four matrices in this order:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    The recurrent weights are learnable parameters. When you train an RNN using the trainNetwork function, if RecurrentWeights is nonempty, then the software uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then the software uses the initializer specified by RecurrentWeightsInitializer.

    At training time, RecurrentWeights is a 4*NumHiddenUnits-by-OutputProjectorSize matrix.

    Input projector, specified as a matrix.

    The input projector weights are learnable parameters. When you train a network using the trainNetwork function, if InputProjector is nonempty, then the software uses the InputProjector property as the initial value. If InputProjector is empty, then the software uses the initializer specified by InputProjectorInitializer.

    At training time, InputProjector is an InputSize-by-InputProjectorSize matrix.

    Data Types: single | double

    Output projector, specified as a matrix.

    The output projector weights are learnable parameters. When you train a network using the trainNetwork function, if OutputProjector is nonempty, then the software uses the OutputProjector property as the initial value. If OutputProjector is empty, then the software uses the initializer specified by OutputProjectorInitializer.

    At training time, OutputProjector is a NumHiddenUnits-by-OutputProjectorSize matrix.

    Data Types: single | double

    Layer biases, specified as a numeric vector.

    The bias vector is a concatenation of the four bias vectors for the components (gates) in the layer. The layer vertically concatenates the four vectors in this order:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

    At training time, Bias is a 4*NumHiddenUnits-by-1 numeric vector.

    Learning Rate and Regularization

    Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

    The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify with the trainingOptions function.

    To control the value of the learning rate factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsLearnRateFactor correspond to the learning rate factor of these components:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    To specify the same value for all the matrices, specify a nonnegative scalar.

    Example: 2

    Example: [1 2 1 1]
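
    For example, this sketch doubles the learning rate factor for the forget-gate input weights only (illustrative layer sizes).

    layer = lstmProjectedLayer(100,30,16);
    layer.InputWeightsLearnRateFactor = [1 2 1 1];   % order: input, forget, cell candidate, output gate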

    Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

    The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

    To control the value of the learning rate factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsLearnRateFactor correspond to the learning rate factor of these components:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    To specify the same value for all the matrices, specify a nonnegative scalar.

    Example: 2

    Example: [1 2 1 1]

    Learning rate factor for the input projector, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate factor for the input projector of the layer. For example, if InputProjectorLearnRateFactor is 2, then the learning rate factor for the input projector of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

    Learning rate factor for the output projector, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate factor for the output projector of the layer. For example, if OutputProjectorLearnRateFactor is 2, then the learning rate factor for the output projector of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

    Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

    The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

    To control the value of the learning rate factor for the four individual vectors in Bias, specify a 1-by-4 vector. The entries of BiasLearnRateFactor correspond to the learning rate factor of these components:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    To specify the same value for all the vectors, specify a nonnegative scalar.

    Example: 2

    Example: [1 2 1 1]

    L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

    To control the value of the L2 regularization factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsL2Factor correspond to the L2 regularization factor of these components:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    To specify the same value for all the matrices, specify a nonnegative scalar.

    Example: 2

    Example: [1 2 1 1]

    L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

    To control the value of the L2 regularization factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsL2Factor correspond to the L2 regularization factor of these components:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    To specify the same value for all the matrices, specify a nonnegative scalar.

    Example: 2

    Example: [1 2 1 1]

    L2 regularization factor for the input projector, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input projector of the layer. For example, if InputProjectorL2Factor is 2, then the L2 regularization factor for the input projector of the layer is twice the current global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

    L2 regularization factor for the output projector, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the output projector of the layer. For example, if OutputProjectorL2Factor is 2, then the L2 regularization factor for the output projector of the layer is twice the current global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

    L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

    To control the value of the L2 regularization factor for the four individual vectors in Bias, specify a 1-by-4 vector. The entries of BiasL2Factor correspond to the L2 regularization factor of these components:

    1. Input gate

    2. Forget gate

    3. Cell candidate

    4. Output gate

    To specify the same value for all the vectors, specify a nonnegative scalar.

    Example: 2

    Example: [1 2 1 1]

    Layer

    Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with the name ''.

    Data Types: char | string

    This property is read-only.

    Number of inputs to the layer.

    If the HasStateInputs property is 0 (false), then the layer has one input with the name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

    If the HasStateInputs property is 1 (true), then the layer has three inputs with the names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

    Data Types: double

    This property is read-only.

    Input names of the layer.

    If the HasStateInputs property is 0 (false), then the layer has one input with the name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

    If the HasStateInputs property is 1 (true), then the layer has three inputs with the names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

    This property is read-only.

    Number of outputs of the layer.

    If the HasStateOutputs property is 0 (false), then the layer has one output with the name 'out', which corresponds to the output data.

    If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.

    Data Types: double

    This property is read-only.

    Output names of the layer.

    If the HasStateOutputs property is 0 (false), then the layer has one output with the name 'out', which corresponds to the output data.

    If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.

    Examples


    Create an LSTM projected layer with 100 hidden units, an output projector size of 30, an input projector size of 16, and the name "lstmp".

    layer = lstmProjectedLayer(100,30,16,Name="lstmp")
    layer = 
      LSTMProjectedLayer with properties:

                           Name: 'lstmp'
                     InputNames: {'in'}
                    OutputNames: {'out'}
                      NumInputs: 1
                     NumOutputs: 1
                 HasStateInputs: 0
                HasStateOutputs: 0

       Hyperparameters
                      InputSize: 'auto'
                 NumHiddenUnits: 100
             InputProjectorSize: 16
            OutputProjectorSize: 30
                     OutputMode: 'sequence'
        StateActivationFunction: 'tanh'
         GateActivationFunction: 'sigmoid'

       Learnable Parameters
                   InputWeights: []
               RecurrentWeights: []
                           Bias: []
                 InputProjector: []
                OutputProjector: []

       State Parameters
                    HiddenState: []
                      CellState: []

    Include an LSTM projected layer in a layer array.

    inputSize = 12;
    numHiddenUnits = 100;
    outputProjectorSize = max(1,floor(0.75*numHiddenUnits));
    inputProjectorSize = max(1,floor(0.25*inputSize));

    layers = [
        sequenceInputLayer(inputSize)
        lstmProjectedLayer(numHiddenUnits,outputProjectorSize,inputProjectorSize)
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    Compare the sizes of networks that do and do not contain projected layers.

    Define an LSTM network architecture. Specify the input size as 12, which corresponds to the number of features of the input data. Configure an LSTM layer with 100 hidden units that outputs the last element of the sequence. Finally, specify nine classes by including a fully connected layer of size 9, followed by a softmax layer and a classification layer.

    inputSize = 12;
    numHiddenUnits = 100;
    numClasses = 9;

    layers = [ ...
        sequenceInputLayer(inputSize)
        lstmLayer(numHiddenUnits,OutputMode="last")
        fullyConnectedLayer(numClasses)
        softmaxLayer
        classificationLayer]

    layers = 
      5x1 Layer array with layers:

         1   ''   Sequence Input          Sequence input with 12 dimensions
         2   ''   LSTM                    LSTM with 100 hidden units
         3   ''   Fully Connected         9 fully connected layer
         4   ''   Softmax                 softmax
         5   ''   Classification Output   crossentropyex

    Analyze the network using the analyzeNetwork function. The network has approximately 46,100 learnable parameters.

    analyzeNetwork(layers)

    Create an identical network with an LSTM projected layer in place of the LSTM layer.

    For the LSTM projected layer:

    • Specify the same number of hidden units as the LSTM layer.

    • Specify an output projector size of 25% of the number of hidden units.

    • Specify an input projector size of 75% of the input size.

    • Ensure that the output and input projector sizes are positive by taking the maximum of the sizes and 1.

    outputProjectorSize = max(1,floor(0.25*numHiddenUnits));
    inputProjectorSize = max(1,floor(0.75*inputSize));

    layersProjected = [ ...
        sequenceInputLayer(inputSize)
        lstmProjectedLayer(numHiddenUnits,outputProjectorSize,inputProjectorSize,OutputMode="last")
        fullyConnectedLayer(numClasses)
        softmaxLayer
        classificationLayer];

    Analyze the network using the analyzeNetwork function. The network has approximately 17,500 learnable parameters, which is a reduction of more than half. The learnable parameters of the layers following the projected layer have the same sizes as in the network without the LSTM projected layer. Reducing the number of learnable parameters by projecting an LSTM layer rather than reducing the number of hidden units of the LSTM layer maintains the output size of the layer and, in turn, the sizes of the downstream layers, which can result in better prediction accuracy.

    analyzeNetwork(layersProjected)
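
    The approximate parameter counts can be reproduced by hand, as in this sketch (the fully connected layer contributes the same number of parameters to both networks).

    % LSTM network: input weights + recurrent weights + bias, plus the fully connected layer
    lstmParams = 4*100*12 + 4*100*100 + 4*100 + (100*9 + 9)                   % about 46,100
    % Projected network: projected input and recurrent weights, projectors, bias, and fully connected layer
    lstmpParams = 4*100*9 + 12*9 + 4*100*25 + 100*25 + 4*100 + (100*9 + 9)    % about 17,500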

    Algorithms


    References

    [1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf

    [2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Vision Society, 2015. https://doi.org/10.1109/ICCV.2015.123

    [3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks." arXiv preprint arXiv:1312.6120 (2013).

    Extended Capabilities

    Version History

    Introduced in R2022b