featureInputLayer
Feature input layer
Description
A feature input layer inputs feature data to a network and applies data normalization. Use this layer when you have a data set of numeric scalars representing features (data without spatial or time dimensions).
For image input, use imageInputLayer.
Creation
Description
layer = featureInputLayer(numFeatures) returns a feature input layer and sets the InputSize property to the specified number of features.

layer = featureInputLayer(numFeatures,Name,Value) sets the optional properties using name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in single quotes.
Properties
Feature Input
InputSize — Number of features
positive integer

Number of features for each observation in the data, specified as a positive integer.

For image input, use imageInputLayer.

Example: 10
Normalization — Data normalization
'none' (default) | 'zerocenter' | 'zscore' | 'rescale-symmetric' | 'rescale-zero-one' | function handle

Data normalization to apply every time data is forward propagated through the input layer, specified as one of the following:

'zerocenter' — Subtract the mean specified by Mean.
'zscore' — Subtract the mean specified by Mean and divide by StandardDeviation.
'rescale-symmetric' — Rescale the input to be in the range [-1, 1] using the minimum and maximum values specified by Min and Max, respectively.
'rescale-zero-one' — Rescale the input to be in the range [0, 1] using the minimum and maximum values specified by Min and Max, respectively.
'none' — Do not normalize the input data.
function handle — Normalize the data using the specified function. The function must be of the form Y = func(X), where X is the input data and the output Y is the normalized data.

Tip
By default, the software automatically calculates the normalization statistics at training time. To save time when training, specify the required statistics for normalization and set the 'ResetInputNormalization' option in trainingOptions to false.
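As a sketch of the function handle option, the following creates a feature input layer that scales incoming data into [0, 1]; the divisor 255 is an assumed maximum chosen purely for illustration:

```matlab
% Sketch: custom normalization via a function handle.
% The maximum value 255 is an assumption for this example.
layer = featureInputLayer(10, ...
    'Normalization',@(X) X/255, ...
    'Name','input')
```

Because the statistics are baked into the function handle, nothing needs to be recalculated at training time for this layer.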
NormalizationDimension — Normalization dimension
'auto' (default) | 'channel' | 'all'

Normalization dimension, specified as one of the following:

'auto' — If the ResetInputNormalization training option is false and you specify any of the normalization statistics (Mean, StandardDeviation, Min, or Max), then normalize over the dimensions matching the statistics. Otherwise, recalculate the statistics at training time and apply channel-wise normalization.
'channel' — Channel-wise normalization.
'all' — Normalize all values using scalar statistics.
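For instance, a layer that zero-centers every feature with a single scalar statistic might be constructed as follows (the mean value here is illustrative, not computed from any real data set):

```matlab
% Scalar Mean combined with 'all': every feature is
% zero-centered using the same scalar statistic.
layer = featureInputLayer(8, ...
    'Normalization','zerocenter', ...
    'NormalizationDimension','all', ...
    'Mean',0.5)
```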
Mean — Mean for zero-center and z-score normalization
[] (default) | column vector | numeric scalar

Mean for zero-center and z-score normalization, specified as a numFeatures-by-1 vector of means per feature, a numeric scalar, or [].

If you specify the Mean property, then Normalization must be 'zerocenter' or 'zscore'. If Mean is [], then the software calculates the mean at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
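A minimal sketch of supplying per-feature means up front; the statistic values below are placeholders standing in for means computed from a hypothetical training set:

```matlab
% Assume three features with known training-set means
% (illustrative values).
featureMeans = [0.1; 2.5; -1.3];
layer = featureInputLayer(3, ...
    'Normalization','zerocenter', ...
    'Mean',featureMeans)
```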
StandardDeviation — Standard deviation for z-score normalization
[] (default) | column vector | numeric scalar

Standard deviation for z-score normalization, specified as a numFeatures-by-1 vector of standard deviations per feature, a numeric scalar, or [].

If you specify the StandardDeviation property, then Normalization must be 'zscore'. If StandardDeviation is [], then the software calculates the standard deviation at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
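A z-score layer needs both statistics; the following sketch uses illustrative precomputed values for a hypothetical three-feature data set:

```matlab
% Z-score normalization with precomputed statistics
% (illustrative values).
mu    = [0; 10; -5];
sigma = [1; 4; 2];
layer = featureInputLayer(3, ...
    'Normalization','zscore', ...
    'Mean',mu, ...
    'StandardDeviation',sigma)
```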
Min — Minimum value for rescaling
[] (default) | column vector | numeric scalar

Minimum value for rescaling, specified as a numFeatures-by-1 vector of minima per feature, a numeric scalar, or [].

If you specify the Min property, then Normalization must be 'rescale-symmetric' or 'rescale-zero-one'. If Min is [], then the software calculates the minimum at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Max — Maximum value for rescaling
[] (default) | column vector | numeric scalar

Maximum value for rescaling, specified as a numFeatures-by-1 vector of maxima per feature, a numeric scalar, or [].

If you specify the Max property, then Normalization must be 'rescale-symmetric' or 'rescale-zero-one'. If Max is [], then the software calculates the maximum at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
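To rescale with known bounds, supply both Min and Max; the per-feature bounds below are assumed values for a hypothetical three-feature data set:

```matlab
% Rescale each feature to [0, 1] using assumed per-feature
% bounds (illustrative values).
featureMins = [0; -1; 100];
featureMaxs = [1;  1; 200];
layer = featureInputLayer(3, ...
    'Normalization','rescale-zero-one', ...
    'Min',featureMins, ...
    'Max',featureMaxs)
```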
Layer
Name — Layer name
'' (default) | character vector | string scalar

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with name ''.

Data Types: char | string
NumInputs — Number of inputs
0 (default)

Number of inputs of the layer. The layer has no inputs.

Data Types: double

InputNames — Input names
{} (default)

Input names of the layer. The layer has no inputs.

Data Types: cell
NumOutputs — Number of outputs
1 (default)

This property is read-only.

Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames — Output names
{'out'} (default)

This property is read-only.

Output names of the layer. This layer has a single output only.

Data Types: cell
Examples
Create Feature Input Layer
Create a feature input layer with the name 'input' for observations consisting of 21 features.
layer = featureInputLayer(21,'Name','input')
layer = 
  FeatureInputLayer with properties:

          Name: 'input'
     InputSize: 21

   Hyperparameters
             Normalization: 'none'
    NormalizationDimension: 'auto'
Include a feature input layer in a Layer array.
numFeatures = 21;
numClasses = 3;

layers = [
    featureInputLayer(numFeatures,'Name','input')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','sm')
    classificationLayer('Name','classification')]
layers = 
  4x1 Layer array with layers:

     1   'input'            Feature Input           21 features
     2   'fc'               Fully Connected         3 fully connected layer
     3   'sm'               Softmax                 softmax
     4   'classification'   Classification Output   crossentropyex
Combine Image and Feature Input Layers
To train a network containing both an image input layer and a feature input layer, you must use a dlnetwork object in a custom training loop.
Define the size of the input image, the number of features of each observation, the number of classes, and the size and number of filters of the convolution layer.
imageInputSize = [28 28 1];
numFeatures = 1;
numClasses = 10;
filterSize = 5;
numFilters = 16;
To create a network with two input layers, you must define the network in two parts and join them, for example, by using a concatenation layer.
Define the first part of the network. Define the image classification layers and include a concatenation layer before the last fully connected layer.
layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(filterSize,numFilters,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(50,'Name','fc1')
    concatenationLayer(1,2,'Name','concat')
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')];
Convert the layers to a layer graph.
lgraph = layerGraph(layers);
For the second part of the network, add a feature input layer and connect it to the second input of the concatenation layer.
featInput = featureInputLayer(numFeatures,'Name','features');
lgraph = addLayers(lgraph, featInput);
lgraph = connectLayers(lgraph,'features','concat/in2');
Visualize the network.
plot(lgraph)
Create a dlnetwork object.
dlnet = dlnetwork(lgraph)
dlnet = 
  dlnetwork with properties:

         Layers: [8x1 nnet.cnn.layer.Layer]
    Connections: [7x2 table]
     Learnables: [6x3 table]
          State: [0x3 table]
     InputNames: {'images'  'features'}
    OutputNames: {'softmax'}
    Initialized: 1
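As a sketch of how this two-input network might be evaluated inside a custom training loop, a single forward pass with a random mini-batch could look like the following (the batch size and random data are illustrative; real code would use actual training data):

```matlab
% Illustrative forward pass through the two-input dlnetwork.
miniBatchSize = 32;
X1 = rand(28,28,1,miniBatchSize);   % image batch (illustrative data)
X2 = rand(1,miniBatchSize);         % feature batch (illustrative data)
dlX1 = dlarray(single(X1),'SSCB');  % spatial, spatial, channel, batch
dlX2 = dlarray(single(X2),'CB');    % channel, batch
dlY = predict(dlnet,dlX1,dlX2);     % inputs follow the InputNames order
```

The inputs to predict are matched positionally to the network's InputNames, so the image dlarray goes first and the feature dlarray second.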
Extended Capabilities
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
To generate CUDA® or C++ code by using GPU Coder™, you must first construct and train a deep neural network. Once the network is trained and evaluated, you can configure the code generator to generate code and deploy the convolutional neural network on platforms that use NVIDIA® or ARM® GPU processors. For more information, see Deep Learning with GPU Coder (GPU Coder).
Version History
Introduced in R2020b