
Check Custom Layer Validity

If you create a custom deep learning layer, then you can use the checkLayer function to check that the layer is valid. The function checks layers for validity, GPU compatibility, correctly defined gradients, and code generation compatibility. To check that a layer is valid, run the following command:

checkLayer(layer,validInputSize)
where layer is an instance of the layer and validInputSize is a vector or cell array specifying the valid input sizes to the layer. To check with multiple observations, use the ObservationDimension option. To check for code generation compatibility, set the CheckCodegenCompatibility option to 1 (true). For large input sizes, the gradient checks take longer to run. To speed up the tests, specify a smaller valid input size.
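For example, the following call also enables the multi-observation and code generation checks (this uses the preluLayer example described below; note that the code generation checks require the ObservationDimension option):

layer = preluLayer(20);
validInputSize = [5 5 20];
checkLayer(layer,validInputSize, ...
    ObservationDimension=4, ...
    CheckCodegenCompatibility=true)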

Check Custom Layer Validity

Check the validity of the example custom layer preluLayer.

The custom layer preluLayer, attached to this example as a supporting file, applies the PReLU operation to the input data. To access this layer, open this example as a live script.

Create an instance of the layer and check that it is valid using checkLayer. Set the valid input size to the typical size of a single observation input to the layer. For a single input, the layer expects observations of size h-by-w-by-c, where h, w, and c are the height, width, and number of channels of the previous layer output, respectively.

Specify validInputSize as the typical size of an input array.

layer = preluLayer(20);
validInputSize = [5 5 20];
checkLayer(layer,validInputSize)
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option. For 2-D image data, set 'ObservationDimension' to 4. For 3-D image data, set 'ObservationDimension' to 5. For sequence data, set 'ObservationDimension' to 2.
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ..
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
    12 Passed, 0 Failed, 0 Incomplete, 16 Skipped.
    Time elapsed: 0.18741 seconds.

The results show the number of passed, failed, and skipped tests. If you do not specify the ObservationDimension option, or do not have a GPU, then the function skips the corresponding tests.

Check Multiple Observations

For multi-observation input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.

To check the layer validity for multiple observations, specify the typical size of an observation and set the ObservationDimension option to 4.

layer = preluLayer(20);
validInputSize = [5 5 20];
checkLayer(layer,validInputSize,ObservationDimension=4)
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
    18 Passed, 0 Failed, 0 Incomplete, 10 Skipped.
    Time elapsed: 0.19972 seconds.

In this case, the function does not detect any issues with the layer.

List of Tests

The checkLayer function checks the validity of a custom layer by performing a series of tests.

Intermediate Layers

The checkLayer function uses these tests to check the validity of custom intermediate layers (layers of type nnet.layer.Layer).

functionSyntaxesAreCorrect: The syntaxes of the layer functions are correctly defined.

predictDoesNotError: The predict function does not error.

forwardDoesNotError: When specified, the forward function does not error.

forwardPredictAreConsistentInSize: When forward is specified, forward and predict output values of the same size.

backwardDoesNotError: When specified, backward does not error.

backwardIsConsistentInSize: When backward is specified, the outputs of backward are consistent in size:

  • The derivatives with respect to each input are the same size as the corresponding input.

  • The derivatives with respect to each learnable parameter are the same size as the corresponding learnable parameter.

predictIsConsistentInType: The outputs of predict are consistent in type with the inputs.

forwardIsConsistentInType: When forward is specified, the outputs of forward are consistent in type with the inputs.

backwardIsConsistentInType: When backward is specified, the outputs of backward are consistent in type with the inputs.

gradientsAreNumericallyCorrect: When backward is specified, the gradients computed in backward are consistent with the numerical gradients.

backwardPropagationDoesNotError: When backward is not specified, the derivatives can be computed using automatic differentiation.

predictReturnsValidStates: For layers with state properties, the predict function returns valid states.

forwardReturnsValidStates: For layers with state properties, the forward function, if specified, returns valid states.

resetStateDoesNotError: For layers with state properties, the resetState function, if specified, does not error and resets the states to valid states.

codegenPragmaDefinedInClassDef: The pragma "%#codegen" for code generation is specified in the class file.

checkForSupportedLayerPropertiesForCodegen: The layer properties support code generation.

predictIsValidForCodeGeneration: predict is valid for code generation.

doesNotHaveStateProperties: For code generation, the layer does not have state properties.

supportedFunctionLayer: For code generation, the layer is not a FunctionLayer object.

Some tests run multiple times. These tests also check different data types and GPU compatibility:

  • predictIsConsistentInType

  • forwardIsConsistentInType

  • backwardIsConsistentInType

To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with underlying data type single.
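For example, a minimal sketch of such a check, assuming a supported GPU and using the preluLayer example layer:

% Run the layer forward function on gpuArray data with underlying type single.
layer = preluLayer(20);
X = gpuArray(single(rand(5,5,20)));
Z = predict(layer,X);
class(Z)            % expected: 'gpuArray'
classUnderlying(Z)  % expected: 'single'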

Output Layers

The checkLayer function uses these tests to check the validity of custom output layers (layers of type nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer).

forwardLossDoesNotError: The forwardLoss function does not error.

backwardLossDoesNotError: The backwardLoss function does not error.

forwardLossIsScalar: The output of forwardLoss is scalar.

backwardLossIsConsistentInSize: When backwardLoss is specified, the output of backwardLoss is consistent in size: dLdY is the same size as the predictions Y.

forwardLossIsConsistentInType: The output of forwardLoss is consistent in type: loss is the same type as the predictions Y.

backwardLossIsConsistentInType: When backwardLoss is specified, the output of backwardLoss is consistent in type: dLdY must be the same type as the predictions Y.

gradientsAreNumericallyCorrect: When backwardLoss is specified, the gradients computed in backwardLoss are numerically correct.

backwardPropagationDoesNotError: When backwardLoss is not specified, the derivatives can be computed using automatic differentiation.

The forwardLossIsConsistentInType and backwardLossIsConsistentInType tests also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with underlying data type single.

Generated Data

To check the layer validity, the checkLayer function generates data depending on the type of layer:

Intermediate: Values in the range [-1,1].

Regression output: Predictions and targets with values in the range [-1,1].

Classification output: Predictions with values in the range [0,1]. If you specify the ObservationDimension option, then the targets are one-hot encoded vectors (vectors containing a single 1, and 0 elsewhere). If you do not specify the ObservationDimension option, then the targets are values in the range [0,1].

To check for multiple observations, specify the observation dimension using the ObservationDimension option. If you specify the observation dimension, then the checkLayer function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify this name-value pair, then the function skips the tests that check that the layer functions are valid for multiple observations.

Diagnostics

If a test fails when you use checkLayer, then the function provides a test diagnostic and a framework diagnostic. The test diagnostic highlights any issues found with the layer. The framework diagnostic provides more detailed information.

Function Syntaxes

The test functionSyntaxesAreCorrect checks that the layer functions have correctly defined syntaxes.

Test Diagnostic: Incorrect number of input arguments for 'predict' in Layer.
Description: The syntax for the predict function is not consistent with the number of layer inputs.
Possible Solution: Specify the correct number of input and output arguments in predict.

The predict function syntax depends on the type of layer.

  • Z = predict(layer,X) forwards the input data X through the layer and outputs the result Z, where layer has a single input and a single output.

  • [Z,state] = predict(layer,X) also outputs the updated state parameter state, where layer has a single state parameter.

You can adjust the syntaxes for layers with multiple inputs, multiple outputs, or multiple state parameters:

  • For layers with multiple inputs, replace X with X1,...,XN, where N is the number of inputs. The NumInputs property must match N.

  • For layers with multiple outputs, replace Z with Z1,...,ZM, where M is the number of outputs. The NumOutputs property must match M.

  • For layers with multiple state parameters, replace state with state1,...,stateK, where K is the number of state parameters.

Tip

If the number of inputs to the layer can vary, then use varargin instead of X1,…,XN. In this case, varargin is a cell array of the inputs, where varargin{i} corresponds to Xi.

If the number of outputs can vary, then use varargout instead of Z1,…,ZN. In this case, varargout is a cell array of the outputs, where varargout{j} corresponds to Zj.

Tip

If the custom layer has a dlnetwork object for a learnable parameter, then in the predict function of the custom layer, use the predict function for the dlnetwork. Using the dlnetwork object predict function ensures that the software uses the correct layer operations for prediction.

Test Diagnostic: Incorrect number of output arguments for 'predict' in Layer.
Description: The syntax for the predict function is not consistent with the number of layer outputs.
Possible Solution: Specify the correct number of input and output arguments in predict, as described above.
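For reference, a minimal sketch of a predict method with a single input and a single output; the scaling operation and the Alpha property are placeholders, not a prescribed implementation:

function Z = predict(layer,X)
    % Forward the input data through the layer at prediction time.
    % One input (X) and one output (Z), matching NumInputs = NumOutputs = 1.
    Z = layer.Alpha .* X;
end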
Test Diagnostic: Incorrect number of input arguments for 'forward' in Layer.
Description: The syntax for the optional forward function is not consistent with the number of layer inputs.
Possible Solution: Specify the correct number of input and output arguments in forward.

The forward function syntax depends on the type of layer:

  • Z = forward(layer,X) forwards the input data X through the layer and outputs the result Z, where layer has a single input and a single output.

  • [Z,state] = forward(layer,X) also outputs the updated state parameter state, where layer has a single state parameter.

  • [__,memory] = forward(layer,X) also returns a memory value for a custom backward function using any of the previous syntaxes. If the layer has both a custom forward function and a custom backward function, then the forward function must return a memory value.

You can adjust the syntaxes for layers with multiple inputs, multiple outputs, or multiple state parameters:

  • For layers with multiple inputs, replace X with X1,...,XN, where N is the number of inputs. The NumInputs property must match N.

  • For layers with multiple outputs, replace Z with Z1,...,ZM, where M is the number of outputs. The NumOutputs property must match M.

  • For layers with multiple state parameters, replace state with state1,...,stateK, where K is the number of state parameters.

Tip

If the number of inputs to the layer can vary, then use varargin instead of X1,…,XN. In this case, varargin is a cell array of the inputs, where varargin{i} corresponds to Xi.

If the number of outputs can vary, then use varargout instead of Z1,…,ZN. In this case, varargout is a cell array of the outputs, where varargout{j} corresponds to Zj.

Tip

If the custom layer has a dlnetwork object for a learnable parameter, then in the forward function of the custom layer, use the forward function of the dlnetwork object. Using the dlnetwork object forward function ensures that the software uses the correct layer operations for training.

Test Diagnostic: Incorrect number of output arguments for 'forward' in Layer.
Description: The syntax for the optional forward function is not consistent with the number of layer outputs.
Possible Solution: Specify the correct number of input and output arguments in forward, as described above.
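For reference, a minimal sketch of a forward method that also returns a memory value for a custom backward function; the operation is a placeholder:

function [Z,memory] = forward(layer,X)
    % Forward the input data through the layer at training time.
    Z = layer.Alpha .* X;
    % Save the input for reuse in the custom backward function.
    memory = X;
end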
Test Diagnostic: Incorrect number of input arguments for 'backward' in Layer.
Description: The syntax for the optional backward function is not consistent with the number of layer inputs and outputs.
Possible Solution: Specify the correct number of input and output arguments in backward.

The backward function syntax depends on the type of layer.

  • dLdX = backward(layer,X,Z,dLdZ,memory) returns the derivatives dLdX of the loss with respect to the layer input, where layer has a single input and a single output. Z corresponds to the forward function output and dLdZ corresponds to the derivative of the loss with respect to Z. The function input memory corresponds to the memory output of the forward function.

  • [dLdX,dLdW] = backward(layer,X,Z,dLdZ,memory) also returns the derivative dLdW of the loss with respect to the learnable parameter, where layer has a single learnable parameter.

  • [dLdX,dLdSin] = backward(layer,X,Z,dLdZ,dLdSout,memory) also returns the derivative dLdSin of the loss with respect to the state input using any of the previous syntaxes, where layer has a single state parameter and dLdSout corresponds to the derivative of the loss with respect to the layer state output.

  • [dLdX,dLdW,dLdSin] = backward(layer,X,Z,dLdZ,dLdSout,memory) also returns the derivative dLdW of the loss with respect to the learnable parameter and the derivative dLdSin of the loss with respect to the layer state input using any of the previous syntaxes, where layer has a single state parameter and a single learnable parameter.

You can adjust the syntaxes for layers with multiple inputs, multiple outputs, multiple learnable parameters, or multiple state parameters:

  • For layers with multiple inputs, replace X and dLdX with X1,...,XN and dLdX1,...,dLdXN, respectively, where N is the number of inputs.

  • For layers with multiple outputs, replace Z and dLdZ with Z1,...,ZM and dLdZ1,...,dLdZM, respectively, where M is the number of outputs.

  • For layers with multiple learnable parameters, replace dLdW with dLdW1,...,dLdWP, where P is the number of learnable parameters.

  • For layers with multiple state parameters, replace dLdSin and dLdSout with dLdSin1,...,dLdSinK and dLdSout1,...,dLdSoutK, respectively, where K is the number of state parameters.

To reduce memory usage by preventing unused variables being saved between the forward and backward pass, replace the corresponding input arguments with ~.

Tip

If the number of inputs to backward can vary, then use varargin instead of the input arguments after layer. In this case, varargin is a cell array of the inputs, where the first N elements correspond to the N layer inputs, the next M elements correspond to the M layer outputs, the next M elements correspond to the derivatives of the loss with respect to the M layer outputs, the next K elements correspond to the K derivatives of the loss with respect to the K state outputs, and the last element corresponds to memory.

If the number of outputs can vary, then use varargout instead of the output arguments. In this case, varargout is a cell array of the outputs, where the first N elements correspond to the derivatives of the loss with respect to the N layer inputs, the next P elements correspond to the derivatives of the loss with respect to the P learnable parameters, and the next K elements correspond to the derivatives of the loss with respect to the K state inputs.

Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Test Diagnostic: Incorrect number of output arguments for 'backward' in Layer.
Description: The syntax for the optional backward function is not consistent with the number of layer outputs.
Possible Solution: Specify the correct number of input and output arguments in backward, as described above.

For layers with multiple inputs or outputs, you must set the values of the layer properties NumInputs (or alternatively, InputNames) and NumOutputs (or alternatively, OutputNames), respectively, in the layer constructor function.
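As a sketch, here is a backward method consistent with the placeholder operation Z = layer.Alpha .* X for a layer with one input, one output, and one learnable parameter Alpha (assumed scalar here):

function [dLdX,dLdAlpha] = backward(layer,X,Z,dLdZ,memory)
    % Chain rule for the placeholder operation Z = Alpha .* X.
    dLdX = layer.Alpha .* dLdZ;       % same size as the input X
    dLdAlpha = sum(X .* dLdZ,"all");  % same size as the scalar parameter Alpha
end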

Multiple Observations

The checkLayer function checks that the layer functions are valid for single and multiple observations. To check for multiple observations, specify the observation dimension using the ObservationDimension option. If you specify the observation dimension, then the checkLayer function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify this name-value pair, then the function skips the tests that check that the layer functions are valid for multiple observations.

Test Diagnostic: Skipping multi-observation tests. To enable checks with multiple observations, specify the 'ObservationDimension' parameter in checkLayer.
Description: If you do not specify the 'ObservationDimension' parameter in checkLayer, then the function skips the tests that check data with multiple observations.
Possible Solution: Use the command checkLayer(layer,validInputSize,'ObservationDimension',dim), where layer is an instance of the custom layer, validInputSize is a vector specifying the valid input size to the layer, and dim specifies the dimension of the observations in the layer input.

For more information, see Layer Input Sizes.

Functions Do Not Error

These tests check that the layers do not error when passed input data of valid size.

Intermediate Layers. The tests predictDoesNotError, forwardDoesNotError, and backwardDoesNotError check that the layer functions do not error when passed inputs of valid size. If you specify an observation dimension, then the function checks the layer for both a single observation and multiple observations.

Test Diagnostic: The function 'predict' threw an error:
Description: The predict function errors when passed data of size validInputSize.
Possible Solution: Address the error described in the Framework Diagnostic section.

Test Diagnostic: The function 'forward' threw an error:
Description: The optional forward function errors when passed data of size validInputSize.
Possible Solution: Address the error described in the Framework Diagnostic section.

Test Diagnostic: The function 'backward' threw an error:
Description: The optional backward function errors when passed the output of predict.
Possible Solution: Address the error described in the Framework Diagnostic section.

Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Output Layers. The tests forwardLossDoesNotError and backwardLossDoesNotError check that the layer functions do not error when passed inputs of valid size. If you specify an observation dimension, then the function checks the layer for both a single observation and multiple observations.

Test Diagnostic: The function 'forwardLoss' threw an error:
Description: The forwardLoss function errors when passed data of size validInputSize.
Possible Solution: Address the error described in the Framework Diagnostic section.

Test Diagnostic: The function 'backwardLoss' threw an error:
Description: The optional backwardLoss function errors when passed data of size validInputSize.
Possible Solution: Address the error described in the Framework Diagnostic section.

Tip

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Outputs Are Consistent in Size

These tests check that the layer function outputs are consistent in size.

Intermediate Layers. The test backwardIsConsistentInSize checks that the backward function outputs derivatives of the correct size.

The backward function syntax depends on the type of layer. The supported syntaxes, including the forms for layers with multiple inputs, multiple outputs, multiple learnable parameters, or multiple state parameters, are listed in Function Syntaxes.

The derivatives dLdX1, …, dLdXn must be the same size as the corresponding layer inputs, and dLdW1, …, dLdWk must be the same size as the corresponding learnable parameters. The sizes must be consistent for input data with single and multiple observations.

Test Diagnostic: Incorrect size of 'dLdX' for 'backward'. / Incorrect size of the derivative of the loss with respect to the input 'in1' for 'backward'.
Description: The derivatives of the loss with respect to the layer inputs must be the same size as the corresponding layer inputs.
Possible Solution: Return the derivatives dLdX1,…,dLdXn with the same size as the corresponding layer inputs X1,…,Xn.

Test Diagnostic: The size of 'Z' returned from 'forward' must be the same as for 'predict'.
Description: The outputs of predict must be the same size as the corresponding outputs of forward.
Possible Solution: Return the outputs Z1,…,Zm of predict with the same size as the corresponding outputs Z1,…,Zm of forward.

Test Diagnostic: Incorrect size of the derivative of the loss with respect to 'W' for 'backward'.
Description: The derivatives of the loss with respect to the learnable parameters must be the same size as the corresponding learnable parameters.
Possible Solution: Return the derivatives dLdW1,…,dLdWk with the same size as the corresponding learnable parameters W1,…,Wk.

Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Output Layers. The test forwardLossIsScalar checks that the output of the forwardLoss function is scalar. When the backwardLoss function is specified, the test backwardLossIsConsistentInSize checks that the outputs of forwardLoss and backwardLoss are the correct size.

The syntax for forwardLoss is loss = forwardLoss(layer,Y,T). The input Y corresponds to the predictions made by the network. These predictions are the output of the previous layer. The input T corresponds to the training targets. The output loss is the loss between Y and T according to the specified loss function. The output loss must be scalar.
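For reference, a minimal forwardLoss sketch that returns a scalar; the mean absolute error here is a placeholder loss, not a prescribed choice:

function loss = forwardLoss(layer,Y,T)
    % Scalar loss between the predictions Y and the targets T.
    loss = mean(abs(Y - T),"all");
end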

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

The syntax for backwardLoss is dLdY = backwardLoss(layer,Y,T). The input Y contains the predictions made by the network and T contains the training targets. The output dLdY is the derivative of the loss with respect to the predictions Y. The output dLdY must be the same size as the layer input Y.
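A matching backwardLoss sketch for the placeholder mean absolute error loss above; the output dLdY is the same size as Y:

function dLdY = backwardLoss(layer,Y,T)
    % Derivative of mean(abs(Y - T),"all") with respect to Y.
    dLdY = sign(Y - T) ./ numel(Y);
end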

Test Diagnostic: Incorrect size of 'loss' for 'forwardLoss'.
Description: The output loss of forwardLoss must be a scalar.
Possible Solution: Return the output loss as a scalar. For example, if you have multiple values of the loss, then you can use mean or sum.

Test Diagnostic: Incorrect size of the derivative of loss 'dLdY' for 'backwardLoss'.
Description: When backwardLoss is specified, the derivatives of the loss with respect to the layer input must be the same size as the layer input.
Possible Solution: Return the derivative dLdY with the same size as the layer input Y.

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Consistent Data Types and GPU Compatibility

These tests check that the layer function outputs are consistent in type and that the layer functions are GPU compatible.

If the layer forward functions fully support dlarray objects, then the layer is GPU compatible. Otherwise, to be GPU compatible, the layer functions must support inputs and return outputs of type gpuArray (Parallel Computing Toolbox).

Many MATLAB® built-in functions support gpuArray (Parallel Computing Toolbox) and dlarray input arguments. For a list of functions that support dlarray objects, see List of Functions with dlarray Support. For a list of functions that execute on a GPU, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). For more information on working with GPUs in MATLAB, see GPU Computing in MATLAB (Parallel Computing Toolbox).

Intermediate Layers. The tests predictIsConsistentInType, forwardIsConsistentInType, and backwardIsConsistentInType check that the layer functions output variables of the correct data type. The tests check that the layer functions return consistent data types when given inputs of the data types single, double, and gpuArray with the underlying types single or double.

Tip

If you preallocate arrays using functions such as zeros, then you must ensure that the data types of these arrays are consistent with the layer function inputs. To create an array of zeros of the same data type as another array, use the "like" option of zeros. For example, to initialize an array of zeros of size sz with the same data type as the array X, use Z = zeros(sz,"like",X).
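For example, a predict sketch that preallocates its output consistently with the input type (the operation and the Alpha property are placeholders):

function Z = predict(layer,X)
    % Preallocate with the same class and device as X so the output type
    % matches single, double, and gpuArray inputs.
    Z = zeros(size(X),"like",X);
    Z(:) = layer.Alpha .* X;
end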

Test Diagnostic: Incorrect type of 'Z' for 'predict'. / Incorrect type of output 'out1' for 'predict'.
Description: The types of the outputs Z1,…,Zm of the predict function must be consistent with the inputs X1,…,Xn.
Possible Solution: Return the outputs Z1,…,Zm with the same type as the inputs X1,…,Xn.

Test Diagnostic: Incorrect type of 'Z' for 'forward'. / Incorrect type of output 'out1' for 'forward'.
Description: The types of the outputs Z1,…,Zm of the optional forward function must be consistent with the inputs X1,…,Xn.
Possible Solution: Return the outputs Z1,…,Zm with the same type as the inputs X1,…,Xn.

Test Diagnostic: Incorrect type of 'dLdX' for 'backward'. / Incorrect type of the derivative of the loss with respect to the input 'in1' for 'backward'.
Description: The types of the derivatives dLdX1,…,dLdXn of the optional backward function must be consistent with the inputs X1,…,Xn.
Possible Solution: Return the derivatives dLdX1,…,dLdXn with the same type as the inputs X1,…,Xn.

Test Diagnostic: Incorrect type of the derivative of loss with respect to 'W' for 'backward'.
Description: The type of the derivative of the loss with respect to each learnable parameter must be consistent with the corresponding learnable parameter.
Possible Solution: For each learnable parameter, return the derivative with the same type as the corresponding learnable parameter.

Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Output Layers. The tests forwardLossIsConsistentInType and backwardLossIsConsistentInType check that the layer functions output variables of the correct data type. The tests check that the layers return consistent data types when given inputs of the data types single, double, and gpuArray with the underlying types single or double.

Test Diagnostic: Incorrect type of 'loss' for 'forwardLoss'.
Description: The type of the output loss of the forwardLoss function must be consistent with the input Y.
Possible Solution: Return loss with the same type as the input Y.

Test Diagnostic: Incorrect type of the derivative of loss 'dLdY' for 'backwardLoss'.
Description: The type of the output dLdY of the optional backwardLoss function must be consistent with the input Y.
Possible Solution: Return dLdY with the same type as the input Y.

Tip

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Correct Gradients

The test gradientsAreNumericallyCorrect checks that the gradients computed by the layer functions are numerically correct. The test backwardPropagationDoesNotError checks that the derivatives can be computed using automatic differentiation.

Intermediate Layers. When the optional backward function is not specified, the test backwardPropagationDoesNotError checks that the derivatives can be computed using automatic differentiation. When the optional backward function is specified, the test gradientsAreNumericallyCorrect tests that the gradients computed in backward are numerically correct.
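If these tests fail, you can reproduce the automatic differentiation path outside checkLayer. A minimal sketch, assuming the layer's predict function supports dlarray input (the helper name layerGradients is hypothetical):

X = dlarray(rand(5,5,20,"single"));
[loss,dLdX] = dlfeval(@layerGradients,layer,X);

function [loss,dLdX] = layerGradients(layer,X)
    % Trace the forward pass and differentiate a scalar reduction of Z.
    Z = predict(layer,X);
    loss = sum(Z,"all");        % placeholder scalar objective
    dLdX = dlgradient(loss,X);  % reverse-mode gradient with respect to X
end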

Test Diagnostic: Expected a dlarray with no dimension labels, but instead found labels.
Description: When the optional backward function is not specified, the layer forward functions must output dlarray objects without dimension labels.
Possible Solution: Ensure that any dlarray objects created in the layer forward functions do not contain dimension labels.

Test Diagnostic: Unable to backward propagate through the layer. Check that the 'forward' function fully supports automatic differentiation. Alternatively, implement the 'backward' function manually. / Unable to backward propagate through the layer. Check that the 'predict' function fully supports automatic differentiation. Alternatively, implement the 'backward' function manually.
Description: One or more of the following:

  • When the optional backward function is not specified, the layer forward functions do not support dlarray objects.

  • When the optional backward function is not specified, the tracing of the input dlarray objects in the forward functions has been broken, for example, by using the extractdata function.

Possible Solution: Check that the forward functions support dlarray objects. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Check that the derivatives of the input dlarray objects can be traced. To learn more about the derivative trace of dlarray objects, see Derivative Trace.

Alternatively, define a custom backward function by creating a function named backward.

Test Diagnostic: The derivative 'dLdX' for 'backward' is inconsistent with the numerical gradient. / The derivative of the loss with respect to the input 'in1' for 'backward' is inconsistent with the numerical gradient. / The derivative of loss with respect to 'W' for 'backward' is inconsistent with the numerical gradient.
Description: One or more of the following:

  • When the optional backward function is specified, the derivative is incorrectly computed.

  • The forward functions are non-differentiable at some input points.

  • The error tolerance is too small.

Possible Solution: If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you can omit the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Check that the derivatives in backward are correctly computed.

If the derivatives are correctly computed, then in the Framework Diagnostic section, manually check the absolute and relative error between the actual and expected values of the derivative. If the absolute and relative errors are within an acceptable margin of the tolerance, then you can ignore this test diagnostic.

Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Output Layers. When the optional backwardLoss function is not specified, the test backwardPropagationDoesNotError checks that the derivatives can be computed using automatic differentiation. When the optional backwardLoss function is specified, the test gradientsAreNumericallyCorrect tests that the gradients computed in backwardLoss are numerically correct.

Test Diagnostic: Expected a dlarray with no dimension labels, but instead found labels.
Description: When the optional backwardLoss function is not specified, the forwardLoss function must output dlarray objects without dimension labels.
Possible Solution: Ensure that any dlarray objects created in the forwardLoss function do not contain dimension labels.

Test Diagnostic: Unable to backward propagate through the layer. Check that the 'forwardLoss' function fully supports automatic differentiation. Alternatively, implement the 'backwardLoss' function manually.
Description: One or more of the following:

  • When the optional backwardLoss function is not specified, the layer forwardLoss function does not support dlarray objects.

  • When the optional backwardLoss function is not specified, the tracing of the input dlarray objects in the forwardLoss function has been broken, for example, by using the extractdata function.

Possible Solution: Check that the forwardLoss function supports dlarray objects. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Check that the derivatives of the input dlarray objects can be traced. To learn more about the derivative trace of dlarray objects, see Derivative Trace.

Alternatively, define a custom backward loss function by creating a function named backwardLoss.

Test Diagnostic: The derivative 'dLdY' for 'backwardLoss' is inconsistent with the numerical gradient.
Description: One or more of the following:

  • The derivative with respect to the predictions Y is incorrectly computed.

  • The function is non-differentiable at some input points.

  • The error tolerance is too small.

Possible Solution: Check that the derivatives in backwardLoss are correctly computed.

If the derivatives are correctly computed, then in the Framework Diagnostic section, manually check the absolute and relative error between the actual and expected values of the derivative. If the absolute and relative errors are within an acceptable margin of the tolerance, then you can ignore this test diagnostic.

Tip

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.

Valid States

For layers with state properties, the test predictReturnsValidStates checks that the predict function returns valid states. When forward is specified, the test forwardReturnsValidStates checks that the forward function returns valid states. The test resetStateDoesNotError checks that the resetState function returns a layer with valid state properties.

Test Diagnostic: Error using 'predict' in Layer. 'State' must be a real-valued numeric array or unformatted dlarray object.
Description: The state outputs must be real-valued numeric arrays or unformatted dlarray objects.
Possible Solution: Ensure that the states identified in the Framework Diagnostic are real-valued numeric arrays or unformatted dlarray objects.

Test Diagnostic: Error using 'resetState' in Layer. 'State' must be a real-valued numeric array or unformatted dlarray object.
Description: The state properties of the returned layer must be real-valued numeric arrays or unformatted dlarray objects.
Possible Solution: Ensure that the states identified in the Framework Diagnostic are real-valued numeric arrays or unformatted dlarray objects.
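For reference, a minimal resetState sketch for a layer with a hypothetical HiddenState property:

function layer = resetState(layer)
    % Reset the state to a valid value: a real numeric array or an
    % unformatted dlarray (the size here is a placeholder).
    layer.HiddenState = zeros(layer.NumHiddenUnits,1,"single");
end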

Code Generation Compatibility

If you set the CheckCodegenCompatibility option to 1 (true), then the checkLayer function checks the layer for code generation compatibility.

The test codegenPragmaDefinedInClassDef checks that the layer definition contains the code generation pragma %#codegen. The test checkForSupportedLayerPropertiesForCodegen checks that the layer properties support code generation. The test predictIsValidForCodeGeneration checks that the outputs of predict are consistent in dimension and batch size.

Code generation supports intermediate layers with 2-D image or feature input only. Code generation does not support layers with state properties (properties with attribute State).

The checkLayer function does not check that functions used by the layer are compatible with code generation. To check that functions used by the custom layer also support code generation, first use the Code Generation Readiness app. For more information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).
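For reference, a sketch of a layer class definition that includes the pragma; the class name and property are placeholders, and the operation mimics the PReLU example:

classdef myCodegenLayer < nnet.layer.Layer
    %#codegen  % marks the class for code generation analysis

    properties (Learnable)
        Alpha  % placeholder learnable parameter
    end

    methods
        function Z = predict(layer,X)
            % PReLU-style operation on the input data.
            Z = max(X,0) + layer.Alpha .* min(X,0);
        end
    end
end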

Test Diagnostic: Specify '%#codegen' in the class definition of custom layer.
Description: The layer definition does not include the pragma "%#codegen" for code generation.
Possible Solution: Add the %#codegen directive (or pragma) to your layer definition to indicate that you intend to generate code for this layer. Adding this directive instructs the MATLAB Code Analyzer to help you diagnose and fix violations that result in errors during code generation.

Test Diagnostic: Nonscalar layer properties must be type single or double or character array for custom layer.
Description: The layer contains nonscalar properties of a type other than single, double, or character array.
Possible Solution: Convert nonscalar properties to use a representation of type single, double, or character array. For example, convert a categorical array to an array of integers of type double representing the categories.

Test Diagnostic: Scalar layer properties must be numeric, logical, or string for custom layer.
Description: The layer contains scalar properties of a type other than numeric, logical, or string.
Possible Solution: Convert scalar properties to use a numeric representation, or a representation of type logical or string. For example, convert a categorical scalar to an integer of type double representing the category.

Test Diagnostic: For code generation, 'Z' must have the same number of dimensions as the layer input.
Description: The number of dimensions of the output Z of predict does not match the number of dimensions of the layer inputs.
Possible Solution: In the predict function, return the outputs with the same number of dimensions as the layer inputs.

Test Diagnostic: For code generation, 'Z' must have the same batch size as the layer input.
Description: The batch size of the output Z of predict does not match the batch size of the layer inputs.
Possible Solution: In the predict function, return the outputs with the same batch size as the layer inputs.
