If you create a custom deep learning layer, then you can use the checkLayer function to check that the layer is valid. The function checks layers for validity, GPU compatibility, correctly defined gradients, and code generation compatibility. To check that a layer is valid, run the following command:
checkLayer(layer,validInputSize)
layer is an instance of the layer, and validInputSize is a vector or cell array specifying the valid input sizes to the layer. To check with multiple observations, use the ObservationDimension option. To check for code generation compatibility, set the CheckCodegenCompatibility option to 1 (true). For large input sizes, the gradient checks take longer to run. To speed up the tests, specify a smaller valid input size.
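For example, a single call that enables both the multi-observation and the code generation checks described above might look like this sketch, where the class name myLayer is a placeholder for your own custom layer:

```matlab
% Sketch: check a hypothetical custom layer with all optional tests enabled
layer = myLayer;                        % your custom layer instance
validInputSize = [24 24 20];            % typical size of one observation
checkLayer(layer,validInputSize, ...
    ObservationDimension=4, ...         % run multi-observation tests
    CheckCodegenCompatibility=true)     % run code generation tests
```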
Check the validity of the example custom layer preluLayer.

The custom layer preluLayer, attached to this example as a supporting file, applies the PReLU operation to the input data. To access this layer, open this example as a live script.
Create an instance of the layer and check that it is valid using checkLayer. Set the valid input size to the typical size of a single observation input to the layer. For a single input, the layer expects observations of size h-by-w-by-c, where h, w, and c are the height, width, and number of channels of the previous layer output, respectively.

Specify validInputSize as the typical size of an input array.
layer = preluLayer(20);
validInputSize = [5 5 20];
checkLayer(layer,validInputSize)
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.
Running nnet.checklayer.TestLayerWithoutBackward
.......... ..
Done nnet.checklayer.TestLayerWithoutBackward
__________
Test Summary:
	12 Passed, 0 Failed, 0 Incomplete, 16 Skipped.
	Time elapsed: 0.18741 seconds.
The results show the number of passed, failed, and skipped tests. If you do not specify the ObservationDimension option, or do not have a GPU, then the function skips the corresponding tests.
Check Multiple Observations
For multi-observation input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.

To check the layer validity for multiple observations, specify the typical size of an observation and set the ObservationDimension option to 4.
layer = preluLayer(20);
validInputSize = [5 5 20];
checkLayer(layer,validInputSize,ObservationDimension=4)
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.
Running nnet.checklayer.TestLayerWithoutBackward
.......... ........
Done nnet.checklayer.TestLayerWithoutBackward
__________
Test Summary:
	18 Passed, 0 Failed, 0 Incomplete, 10 Skipped.
	Time elapsed: 0.19972 seconds.
In this case, the function does not detect any issues with the layer.
The checkLayer function checks the validity of a custom layer by performing a series of tests.

The checkLayer function uses these tests to check the validity of custom intermediate layers (layers of type nnet.layer.Layer).
| Test | Description |
| --- | --- |
| functionSyntaxesAreCorrect | The syntaxes of the layer functions are correctly defined. |
| predictDoesNotError | The predict function does not error. |
| forwardDoesNotError | When specified, the forward function does not error. |
| forwardPredictAreConsistentInSize | When forward is specified, the outputs of forward are the same size as the outputs of predict. |
| backwardDoesNotError | When specified, backward does not error. |
| backwardIsConsistentInSize | When backward is specified, the outputs of backward are consistent in size: the derivative with respect to each input is the same size as that input, and the derivative with respect to each learnable parameter is the same size as that parameter. |
| predictIsConsistentInType | The outputs of predict are consistent in type with the inputs. |
| forwardIsConsistentInType | When forward is specified, the outputs of forward are consistent in type with the inputs. |
| backwardIsConsistentInType | When backward is specified, the outputs of backward are consistent in type with the inputs. |
| gradientsAreNumericallyCorrect | When backward is specified, the gradients computed in backward are consistent with the numerical gradients. |
| backwardPropagationDoesNotError | When backward is not specified, the derivatives can be computed using automatic differentiation. |
| predictReturnsValidStates | For layers with state properties, the predict function returns valid states. |
| forwardReturnsValidStates | For layers with state properties, the forward function, if specified, returns valid states. |
| resetStateDoesNotError | For layers with state properties, the resetState function, if specified, does not error and resets the states to valid states. |
| codegenPragmaDefinedInClassDef | The pragma "%#codegen" for code generation is specified in the class file. |
| checkForSupportedLayerPropertiesForCodegen | The layer properties support code generation. |
| predictIsValidForCodeGeneration | predict is valid for code generation. |
| doesNotHaveStateProperties | For code generation, the layer does not have state properties. |
| supportedFunctionLayer | For code generation, the layer is not a FunctionLayer object. |
Some tests run multiple times. These tests also check different data types and for GPU compatibility:

- predictIsConsistentInType
- forwardIsConsistentInType
- backwardIsConsistentInType

To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with the underlying data type single.
The checkLayer function uses these tests to check the validity of custom output layers (layers of type nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer).
| Test | Description |
| --- | --- |
| forwardLossDoesNotError | forwardLoss does not error. |
| backwardLossDoesNotError | backwardLoss does not error. |
| forwardLossIsScalar | The output of forwardLoss is scalar. |
| backwardLossIsConsistentInSize | When backwardLoss is specified, the output of backwardLoss is consistent in size: dLdY is the same size as the predictions Y. |
| forwardLossIsConsistentInType | The output of forwardLoss is consistent in type with the input Y. |
| backwardLossIsConsistentInType | When backwardLoss is specified, the output of backwardLoss is consistent in type with the input Y. |
| gradientsAreNumericallyCorrect | When backwardLoss is specified, the gradients computed in backwardLoss are numerically correct. |
| backwardPropagationDoesNotError | When backwardLoss is not specified, the derivatives can be computed using automatic differentiation. |
The forwardLossIsConsistentInType and backwardLossIsConsistentInType tests also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with the underlying data type single.
To check the layer validity, the checkLayer function generates data depending on the type of layer:
| Layer Type | Description of Generated Data |
| --- | --- |
| Intermediate | Values in the range [-1,1] |
| Regression output | Predictions and targets with values in the range [-1,1] |
| Classification output | Predictions with values in the range [0,1]. The form of the generated targets depends on whether you specify the ObservationDimension option. |
To check for multiple observations, specify the observation dimension using the ObservationDimension option. If you specify the observation dimension, then the checkLayer function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify this name-value pair, then the function skips the tests that check that the layer functions are valid for multiple observations.
If a test fails when you use checkLayer, then the function provides a test diagnostic and a framework diagnostic. The test diagnostic highlights any issues found with the layer. The framework diagnostic provides more detailed information.

The test functionSyntaxesAreCorrect checks that the layer functions have correctly defined syntaxes.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Incorrect number of input arguments for 'predict' in Layer. | The syntax for the predict function is not consistent with the number of layer inputs. | Specify the correct number of input and output arguments in predict. |
| Incorrect number of output arguments for 'predict' in Layer. | The syntax for the predict function is not consistent with the number of layer outputs. | |
| Incorrect number of input arguments for 'forward' in Layer. | The syntax for the optional forward function is not consistent with the number of layer inputs. | Specify the correct number of input and output arguments in forward. |
| Incorrect number of output arguments for 'forward' in Layer. | The syntax for the optional forward function is not consistent with the number of layer outputs. | |
| Incorrect number of input arguments for 'backward' in Layer. | The syntax for the optional backward function is not consistent with the number of layer inputs and outputs. | Specify the correct number of input and output arguments in backward. |
| Incorrect number of output arguments for 'backward' in Layer. | The syntax for the optional backward function is not consistent with the number of layer outputs. | |

You can adjust these syntaxes for layers with multiple inputs, multiple outputs, multiple learnable parameters, or multiple state parameters. If the number of inputs to a layer function can vary, use varargin instead of the named input arguments; if the number of outputs can vary, use varargout instead of the named output arguments. To reduce memory usage by preventing unused variables from being saved between the forward and backward pass, replace the corresponding input arguments with ~.
For layers with multiple inputs or outputs, you must set the values of the layer properties NumInputs (or alternatively, InputNames) and NumOutputs (or alternatively, OutputNames) in the layer constructor function, respectively.
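As a sketch, a constructor for a hypothetical two-input layer (here named additionLayer2 for illustration) might declare its input and output counts like this:

```matlab
classdef additionLayer2 < nnet.layer.Layer
    % Hypothetical layer with two inputs and one output
    methods
        function layer = additionLayer2(name)
            layer.Name = name;
            layer.NumInputs = 2;    % declare two inputs in the constructor
            layer.NumOutputs = 1;   % one output (alternatively, set OutputNames)
        end
        function Z = predict(layer,X1,X2)
            % predict takes one input argument per declared input
            Z = X1 + X2;
        end
    end
end
```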
The checkLayer function checks that the layer functions are valid for single and multiple observations. To check for multiple observations, specify the observation dimension using the ObservationDimension option. If you specify the observation dimension, then the checkLayer function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify this name-value pair, then the function skips the tests that check that the layer functions are valid for multiple observations.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Skipping multi-observation tests. To enable checks with multiple observations, specify the 'ObservationDimension' parameter in checkLayer. | If you do not specify the 'ObservationDimension' parameter in checkLayer, then the function skips the tests that check data with multiple observations. | Use the command checkLayer(layer,validInputSize,ObservationDimension=dim). For more information, see Layer Input Sizes. |
These tests check that the layers do not error when passed input data of valid size.
Intermediate Layers. The tests predictDoesNotError, forwardDoesNotError, and backwardDoesNotError check that the layer functions do not error when passed inputs of valid size. If you specify an observation dimension, then the function checks the layer for both a single observation and multiple observations.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| The function 'predict' threw an error: | The predict function errors when passed data of size validInputSize. | Address the error described in the Framework Diagnostic. Tip: If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. |
| The function 'forward' threw an error: | The optional forward function errors when passed data of size validInputSize. | |
| The function 'backward' threw an error: | The optional backward function errors when passed the output of predict. | |
Output Layers. The tests forwardLossDoesNotError and backwardLossDoesNotError check that the layer functions do not error when passed inputs of valid size. If you specify an observation dimension, then the function checks the layer for both a single observation and multiple observations.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| The function 'forwardLoss' threw an error: | The forwardLoss function errors when passed data of size validInputSize. | Address the error described in the Framework Diagnostic. Tip: If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. |
| The function 'backwardLoss' threw an error: | The optional backwardLoss function errors when passed data of size validInputSize. | |
These tests check that the layer function outputs are consistent in size.
Intermediate Layers. The test backwardIsConsistentInSize checks that the backward function outputs derivatives of the correct size.

The backward function syntax depends on the type of layer.
dLdX = backward(layer,X,Z,dLdZ,memory) returns the derivatives dLdX of the loss with respect to the layer input, where layer has a single input and a single output. Z corresponds to the forward function output and dLdZ corresponds to the derivative of the loss with respect to Z. The function input memory corresponds to the memory output of the forward function.

[dLdX,dLdW] = backward(layer,X,Z,dLdZ,memory) also returns the derivative dLdW of the loss with respect to the learnable parameter, where layer has a single learnable parameter.

[dLdX,dLdSin] = backward(layer,X,Z,dLdZ,dLdSout,memory) also returns the derivative dLdSin of the loss with respect to the state input using any of the previous syntaxes, where layer has a single state parameter and dLdSout corresponds to the derivative of the loss with respect to the layer state output.

[dLdX,dLdW,dLdSin] = backward(layer,X,Z,dLdZ,dLdSout,memory) also returns the derivative dLdW of the loss with respect to the learnable parameter and returns the derivative dLdSin of the loss with respect to the layer state input using any of the previous syntaxes, where layer has a single state parameter and single learnable parameter.
You can adjust the syntaxes for layers with multiple inputs, multiple outputs, multiple learnable parameters, or multiple state parameters:

- For layers with multiple inputs, replace X and dLdX with X1,...,XN and dLdX1,...,dLdXN, respectively, where N is the number of inputs.
- For layers with multiple outputs, replace Z and dLdZ with Z1,...,ZM and dLdZ1,...,dLdZM, respectively, where M is the number of outputs.
- For layers with multiple learnable parameters, replace dLdW with dLdW1,...,dLdWP, where P is the number of learnable parameters.
- For layers with multiple state parameters, replace dLdSin and dLdSout with dLdSin1,...,dLdSinK and dLdSout1,...,dLdSoutK, respectively, where K is the number of state parameters.

To reduce memory usage by preventing unused variables from being saved between the forward and backward pass, replace the corresponding input arguments with ~.
Tip

If the number of inputs to backward can vary, then use varargin instead of the input arguments after layer. In this case, varargin is a cell array of the inputs, where the first N elements correspond to the N layer inputs, the next M elements correspond to the M layer outputs, the next M elements correspond to the derivatives of the loss with respect to the M layer outputs, the next K elements correspond to the K derivatives of the loss with respect to the K state outputs, and the last element corresponds to memory.

If the number of outputs can vary, then use varargout instead of the output arguments. In this case, varargout is a cell array of the outputs, where the first N elements correspond to the N derivatives of the loss with respect to the N layer inputs, the next P elements correspond to the derivatives of the loss with respect to the P learnable parameters, and the next K elements correspond to the derivatives of the loss with respect to the K state inputs.
The derivatives dLdX1,…,dLdXN must be the same size as the corresponding layer inputs, and dLdW1,…,dLdWP must be the same size as the corresponding learnable parameters. The sizes must be consistent for input data with single and multiple observations.
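For illustration, a backward function for the preluLayer example (assuming a learnable parameter Alpha with one element per channel) could return derivatives of the required sizes like this; treat it as a sketch rather than the definitive implementation:

```matlab
function [dLdX,dLdAlpha] = backward(layer,X,~,dLdZ,~)
    % dLdX must be the same size as the layer input X
    dLdX = layer.Alpha .* dLdZ;
    dLdX(X > 0) = dLdZ(X > 0);

    % dLdAlpha must be the same size as the learnable parameter Alpha,
    % so sum over the spatial and observation dimensions
    dLdAlpha = min(0,X) .* dLdZ;
    dLdAlpha = sum(dLdAlpha,[1 2]);
    if ndims(dLdZ) == 4
        dLdAlpha = sum(dLdAlpha,4);   % sum over observations, if present
    end
end
```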
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Incorrect size of 'dLdX' for 'backward'. | The derivatives of the loss with respect to the layer inputs must be the same size as the corresponding layer input. | Return the derivatives dLdX1,…,dLdXN with the same size as the corresponding layer inputs. |
| Incorrect size of the derivative of the loss with respect to the input 'in1' for 'backward'. | | |
| The size of 'Z' returned from 'forward' must be the same as for 'predict'. | The outputs of predict must be the same size as the corresponding outputs of forward. | Return the outputs Z1,…,ZM of predict with the same size as the corresponding outputs of forward. |
| Incorrect size of the derivative of the loss with respect to 'W' for 'backward'. | The derivatives of the loss with respect to the learnable parameters must be the same size as the corresponding learnable parameters. | Return the derivatives dLdW1,…,dLdWP with the same size as the corresponding learnable parameters. |
Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.
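For example, the PReLU operation itself can be written using only functions with dlarray support, in which case no backward function is needed; a minimal sketch:

```matlab
function Z = predict(layer,X)
    % max and min support dlarray objects, so automatic differentiation
    % can determine the backward function for this layer
    Z = max(X,0) + layer.Alpha .* min(X,0);
end
```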
Output Layers. The test forwardLossIsScalar checks that the output of the forwardLoss function is scalar. When the backwardLoss function is specified, the test backwardLossIsConsistentInSize checks that the outputs of forwardLoss and backwardLoss are the correct size.
The syntax for forwardLoss is loss = forwardLoss(layer,Y,T). The input Y corresponds to the predictions made by the network. These predictions are the output of the previous layer. The input T corresponds to the training targets. The output loss is the loss between Y and T according to the specified loss function. The output loss must be scalar.
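As an illustration, a forwardLoss for a hypothetical regression output layer using the mean absolute error (with predictions of size 1-by-1-by-R-by-N) might look like this sketch:

```matlab
function loss = forwardLoss(layer,Y,T)
    % Mean absolute error, averaged over the R responses and
    % N observations so that the returned loss is scalar
    R = size(Y,3);
    meanAbsoluteError = sum(abs(Y-T),3)/R;   % 1-by-1-by-1-by-N
    N = size(Y,4);
    loss = sum(meanAbsoluteError,4)/N;       % scalar
end
```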
If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.
The syntax for backwardLoss is dLdY = backwardLoss(layer,Y,T). The input Y contains the predictions made by the network and T contains the training targets. The output dLdY is the derivative of the loss with respect to the predictions Y. The output dLdY must be the same size as the layer input Y.
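As an illustration, for a hypothetical regression layer whose forwardLoss computes the mean absolute error over 1-by-1-by-R-by-N predictions, a matching backwardLoss must return a derivative the same size as Y; a sketch:

```matlab
function dLdY = backwardLoss(layer,Y,T)
    % d/dY of (1/(N*R)) * sum(|Y-T|); sign(Y-T) has the same
    % size as Y, so dLdY is the same size as the layer input Y
    R = size(Y,3);   % number of responses
    N = size(Y,4);   % number of observations
    dLdY = sign(Y-T)/(N*R);
end
```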
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Incorrect size of 'loss' for 'forwardLoss'. | The output loss of forwardLoss must be a scalar. | Return the output loss as a scalar. |
| Incorrect size of the derivative of loss 'dLdY' for 'backwardLoss'. | When backwardLoss is specified, the derivatives of the loss with respect to the layer input must be the same size as the layer input. | Return the derivative dLdY with the same size as the layer input Y. If the forwardLoss function supports dlarray objects, then the software can determine the backward loss function automatically and you do not need to specify backwardLoss. |
These tests check that the layer function outputs are consistent in type and that the layer functions are GPU compatible.
If the layer forward functions fully support dlarray objects, then the layer is GPU compatible. Otherwise, to be GPU compatible, the layer functions must support inputs and return outputs of type gpuArray (Parallel Computing Toolbox).

Many MATLAB® built-in functions support gpuArray (Parallel Computing Toolbox) and dlarray input arguments. For a list of functions that support dlarray objects, see List of Functions with dlarray Support. For a list of functions that execute on a GPU, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). For more information on working with GPUs in MATLAB, see GPU Computing in MATLAB (Parallel Computing Toolbox).
Intermediate Layers. The tests predictIsConsistentInType, forwardIsConsistentInType, and backwardIsConsistentInType check that the layer functions output variables of the correct data type. The tests check that the layer functions return consistent data types when given inputs of the data types single, double, and gpuArray with the underlying types single or double.
Tip

If you preallocate arrays using functions such as zeros, then you must ensure that the data types of these arrays are consistent with the layer function inputs. To create an array of zeros of the same data type as another array, use the "like" option of zeros. For example, to initialize an array of zeros of size sz with the same data type as the array X, use Z = zeros(sz,"like",X).
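For example, in a predict function you might preallocate the output like this (a sketch; the thresholding operation is purely illustrative):

```matlab
function Z = predict(layer,X)
    % Preallocate Z with the same data type (and device) as X, so that
    % single, double, and gpuArray inputs all produce matching outputs
    Z = zeros(size(X),"like",X);
    idx = X > 0;
    Z(idx) = X(idx);   % illustrative operation: keep positive values
end
```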
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Incorrect type of 'Z' for 'predict'. | The types of the outputs Z1,…,Zm of the predict function must be consistent with the inputs X1,…,Xn. | Return the outputs Z1,…,Zm with the same type as the inputs X1,…,Xn. |
| Incorrect type of output 'out1' for 'predict'. | | |
| Incorrect type of 'Z' for 'forward'. | The types of the outputs Z1,…,Zm of the optional forward function must be consistent with the inputs X1,…,Xn. | |
| Incorrect type of output 'out1' for 'forward'. | | |
| Incorrect type of 'dLdX' for 'backward'. | The types of the derivatives dLdX1,…,dLdXn of the optional backward function must be consistent with the inputs X1,…,Xn. | Return the derivatives dLdX1,…,dLdXn with the same type as the inputs X1,…,Xn. |
| Incorrect type of the derivative of the loss with respect to the input 'in1' for 'backward'. | | |
| Incorrect type of the derivative of loss with respect to 'W' for 'backward'. | The type of the derivative of the loss with respect to each learnable parameter must be consistent with the corresponding learnable parameter. | For each learnable parameter, return the derivative with the same type as the corresponding learnable parameter. |
Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.
Output Layers. The tests forwardLossIsConsistentInType and backwardLossIsConsistentInType check that the layer functions output variables of the correct data type. The tests check that the layers return consistent data types when given inputs of the data types single, double, and gpuArray with the underlying types single or double.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Incorrect type of 'loss' for 'forwardLoss'. | The type of the output loss of the forwardLoss function must be consistent with the input Y. | Return loss with the same type as the input Y. |
| Incorrect type of the derivative of loss 'dLdY' for 'backwardLoss'. | The type of the output dLdY of the optional backwardLoss function must be consistent with the input Y. | Return dLdY with the same type as the input Y. |
Tip

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.
The test gradientsAreNumericallyCorrect checks that the gradients computed by the layer functions are numerically correct. The test backwardPropagationDoesNotError checks that the derivatives can be computed using automatic differentiation.

Intermediate Layers. When the optional backward function is not specified, the test backwardPropagationDoesNotError checks that the derivatives can be computed using automatic differentiation. When the optional backward function is specified, the test gradientsAreNumericallyCorrect tests that the gradients computed in backward are numerically correct.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Expected a dlarray with no dimension labels, but instead found labels. | When the optional backward function is not specified, the layer forward functions must output dlarray objects without dimension labels. | Ensure that any dlarray objects created in the layer forward functions do not contain dimension labels. |
| Unable to backward propagate through the layer. Check that the 'forward' function fully supports automatic differentiation. Alternatively, implement the 'backward' function manually. | One or more of the layer forward functions does not fully support automatic differentiation. | Check that the forward functions support dlarray objects. Check that the derivatives of the input data can be computed. Alternatively, define a custom backward function by creating a function named backward. |
| Unable to backward propagate through the layer. Check that the 'predict' function fully supports automatic differentiation. Alternatively, implement the 'backward' function manually. | | |
| The derivative 'dLdX' for 'backward' is inconsistent with the numerical gradient. | One or more of the derivatives computed in backward is inconsistent with the corresponding numerical gradient. | If the layer forward functions support dlarray objects, then you can omit the backward function and let the software determine the derivatives using automatic differentiation. Check that the derivatives in backward are correctly computed. If the derivatives are correctly computed, then in the Framework Diagnostic, inspect the absolute and relative errors between the actual and expected values of the derivatives. If the absolute and relative errors are within an acceptable margin of the tolerance, then you can ignore this test diagnostic. |
| The derivative of the loss with respect to the input 'in1' for 'backward' is inconsistent with the numerical gradient. | | |
| The derivative of loss with respect to 'W' for 'backward' is inconsistent with the numerical gradient. | | |
Tip

If the layer forward functions support dlarray objects, then the software automatically determines the backward function and you do not need to specify the backward function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.
Output Layers. When the optional backwardLoss function is not specified, the test backwardPropagationDoesNotError checks that the derivatives can be computed using automatic differentiation. When the optional backwardLoss function is specified, the test gradientsAreNumericallyCorrect tests that the gradients computed in backwardLoss are numerically correct.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Expected a dlarray with no dimension labels, but instead found labels. | When the optional backwardLoss function is not specified, the forwardLoss function must output dlarray objects without dimension labels. | Ensure that any dlarray objects created in the forwardLoss function do not contain dimension labels. |
| Unable to backward propagate through the layer. Check that the 'forwardLoss' function fully supports automatic differentiation. Alternatively, implement the 'backwardLoss' function manually. | The forwardLoss function does not fully support automatic differentiation. | Check that the forwardLoss function supports dlarray objects. Check that the derivatives of the input data can be computed. Alternatively, define a custom backward loss function by creating a function named backwardLoss. |
| The derivative 'dLdY' for 'backwardLoss' is inconsistent with the numerical gradient. | The derivative computed in backwardLoss is inconsistent with the numerical gradient. | Check that the derivatives in backwardLoss are correctly computed. If the derivatives are correctly computed, then in the Framework Diagnostic, inspect the absolute and relative errors between the actual and expected values of the derivative. If the absolute and relative errors are within an acceptable margin of the tolerance, then you can ignore this test diagnostic. |
Tip

If the forwardLoss function supports dlarray objects, then the software automatically determines the backward loss function and you do not need to specify the backwardLoss function. For a list of functions that support dlarray objects, see List of Functions with dlarray Support.
For layers with state properties, the test predictReturnsValidStates checks that the predict function returns valid states. When forward is specified, the test forwardReturnsValidStates checks that the forward function returns valid states. The test resetStateDoesNotError checks that the resetState function returns a layer with valid state properties.
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Error using 'predict' in Layer. 'State' must be real-values numeric array or unformatted dlarray object. | State outputs must be real-valued numeric arrays or unformatted dlarray objects. | Ensure that the states identified in the Framework Diagnostic are real-valued numeric arrays or unformatted dlarray objects. |
| Error using 'resetState' in Layer. 'State' must be real-values numeric array or unformatted dlarray object. | State properties of the returned layer must be real-valued numeric arrays or unformatted dlarray objects. | |
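As a sketch, a hypothetical layer with a state property might return and reset its state like this (the running-mean behavior is purely illustrative):

```matlab
classdef runningMeanLayer < nnet.layer.Layer
    properties (State)
        RunningMean   % state property updated by predict
    end
    methods
        function [Z,runningMean] = predict(layer,X)
            % The returned state must be a real-valued numeric array
            % or an unformatted dlarray object
            Z = X - layer.RunningMean;
            runningMean = 0.9*layer.RunningMean + 0.1*mean(X,"all");
        end
        function layer = resetState(layer)
            layer.RunningMean = 0;   % reset to a valid state
        end
    end
end
```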
If you set the CheckCodegenCompatibility option to 1 (true), then the checkLayer function checks the layer for code generation compatibility.
The test codegenPragmaDefinedInClassDef checks that the layer definition contains the code generation pragma %#codegen. The test checkForSupportedLayerPropertiesForCodegen checks that the layer properties support code generation. The test predictIsValidForCodeGeneration checks that the outputs of predict are consistent in dimension and batch size.

Code generation supports intermediate layers with 2-D image or feature input only. Code generation does not support layers with state properties (properties with attribute State).
The checkLayer function does not check that functions used by the layer are compatible with code generation. To check that functions used by the custom layer also support code generation, first use the Code Generation Readiness app. For more information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).
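For illustration, a minimal class definition that satisfies the pragma and property checks might look like this sketch (the layer name and scaling behavior are placeholders):

```matlab
classdef scalingLayer2d < nnet.layer.Layer
    %#codegen   % pragma checked by codegenPragmaDefinedInClassDef
    properties
        Scale = 2   % scalar numeric property, supported for code generation
    end
    methods
        function Z = predict(layer,X)
            % Z has the same number of dimensions and batch size as X,
            % as predictIsValidForCodeGeneration requires
            Z = layer.Scale * X;
        end
    end
end
```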
| Test Diagnostic | Description | Possible Solution |
| --- | --- | --- |
| Specify '%#codegen' in the class definition of custom layer. | The layer definition does not include the pragma "%#codegen" for code generation. | Add the %#codegen pragma to the class definition to indicate that you intend to generate code for this layer. |
| Nonscalar layer properties must be type single or double or character array for custom layer. | The layer contains non-scalar properties of a type other than single, double, or character array. | Convert non-scalar properties to use a representation of type single, double, or character array. For example, convert a categorical array to an array of integers of type double representing the categories. |
| Scalar layer properties must be numeric, logical, or string for custom layer. | The layer contains scalar properties of a type other than numeric, logical, or string. | Convert scalar properties to use a numeric representation, or a representation of type logical or string. For example, convert a categorical scalar to an integer of type double representing the category. |
| For code generation, 'Z' must have the same number of dimensions as the layer input. | The number of dimensions of the output Z of predict does not match the number of dimensions of the layer inputs. | In the predict function, return the outputs with the same number of dimensions as the layer inputs. |
| For code generation, 'Z' must have the same batch size as the layer input. | The batch size of the output Z of predict does not match the batch size of the layer inputs. | In the predict function, return the outputs with the same batch size as the layer inputs. |