
featureInputLayer

Feature input layer

Description

A feature input layer inputs feature data to a network and applies data normalization. Use this layer when you have a data set of numeric scalars representing features (data without spatial or time dimensions).

For image input, use imageInputLayer.

Creation

Description

layer = featureInputLayer(numFeatures) returns a feature input layer and sets the InputSize property to the specified number of features.


layer = featureInputLayer(numFeatures,Name,Value) sets the optional properties using name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in single quotes.
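For example, a minimal sketch (the feature count and property values here are only illustrative) that creates a layer named 'input' with z-score normalization:

layer = featureInputLayer(10,'Name','input','Normalization','zscore');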

Properties


Feature Input

Number of features for each observation in the data, specified as a positive integer.

For image input, use imageInputLayer.

Example: 10

Data normalization to apply every time data is forward propagated through the input layer, specified as one of the following:

  • 'zerocenter' — Subtract the mean specified by Mean.

  • 'zscore' — Subtract the mean specified by Mean and divide by StandardDeviation.

  • 'rescale-symmetric' — Rescale the input to be in the range [-1, 1] using the minimum and maximum values specified by Min and Max, respectively.

  • 'rescale-zero-one' — Rescale the input to be in the range [0, 1] using the minimum and maximum values specified by Min and Max, respectively.

  • 'none' — Do not normalize the input data.

  • function handle — Normalize the data using the specified function. The function must be of the form Y = func(X), where X is the input data and the output Y is the normalized data, as in the sketch after this list.
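As a minimal sketch (the feature count and the scaling function are purely illustrative), a function handle can be passed directly when constructing the layer:

layer = featureInputLayer(10,'Normalization',@(X) X/255);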

Tip

The software, by default, automatically calculates the normalization statistics at training time. To save time when training, specify the required statistics for normalization and set the 'ResetInputNormalization' option in trainingOptions to false.
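A minimal sketch of this workflow, assuming placeholder statistics and an arbitrary solver choice:

layer = featureInputLayer(10,'Normalization','zerocenter','Mean',zeros(10,1)); % precomputed mean (placeholder values)
options = trainingOptions('adam','ResetInputNormalization',false);             % do not recalculate statistics at training time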

Normalization dimension, specified as one of the following:

  • 'auto' – If the ResetInputNormalization training option is false and you specify any of the normalization statistics (Mean, StandardDeviation, Min, or Max), then normalize over the dimensions matching the statistics. Otherwise, recalculate the statistics at training time and apply channel-wise normalization.

  • 'channel' – Channel-wise normalization.

  • 'all' – Normalize all values using scalar statistics.

Mean for zero-center and z-score normalization, specified as a numFeatures-by-1 vector of means per feature, a numeric scalar, or [].

If you specify the Mean property, then Normalization must be 'zerocenter' or 'zscore'. If Mean is [], then the software calculates the mean at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Standard deviation for z-score normalization, specified as a numFeatures-by-1 vector of standard deviations per feature, a numeric scalar, or [].

If you specify the StandardDeviation property, then Normalization must be 'zscore'. If StandardDeviation is [], then the software calculates the standard deviation at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
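For example, a sketch of assembling a layer with precomputed z-score statistics, assuming X is a hypothetical numObservations-by-numFeatures matrix of training data:

mu = mean(X,1)';      % per-feature means, numFeatures-by-1
sigma = std(X,0,1)';  % per-feature standard deviations, numFeatures-by-1
layer = featureInputLayer(size(X,2),'Normalization','zscore', ...
    'Mean',mu,'StandardDeviation',sigma);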

Minimum value for rescaling, specified as a numFeatures-by-1 vector of minima per feature, a numeric scalar, or [].

If you specify the Min property, then Normalization must be 'rescale-symmetric' or 'rescale-zero-one'. If Min is [], then the software calculates the minimum at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Maximum value for rescaling, specified as a numFeatures-by-1 vector of maxima per feature, a numeric scalar, or [].

If you specify the Max property, then Normalization must be 'rescale-symmetric' or 'rescale-zero-one'. If Max is [], then the software calculates the maximum at training time.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
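Similarly, a sketch of precomputing per-feature extremes for 'rescale-zero-one' normalization, again assuming a hypothetical numObservations-by-numFeatures training matrix X:

layer = featureInputLayer(size(X,2),'Normalization','rescale-zero-one', ...
    'Min',min(X,[],1)','Max',max(X,[],1)');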

Layer

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with the name ''.

Data Types: char | string

Number of inputs of the layer. The layer has no inputs.

Data Types: double

Input names of the layer. The layer has no inputs.

Data Types: cell

This property is read-only.

Number of outputs of the layer. This layer has a single output only.

Data Types: double

This property is read-only.

Output names of the layer. This layer has a single output only.

Data Types: cell

Examples


Create a feature input layer with the name 'input' for observations consisting of 21 features.

layer = featureInputLayer(21,'Name','input')

layer = 
  FeatureInputLayer with properties:

                      Name: 'input'
                 InputSize: 21

   Hyperparameters
             Normalization: 'none'
    NormalizationDimension: 'auto'

Include a feature input layer in a Layer array.

numFeatures = 21;
numClasses = 3;

layers = [
    featureInputLayer(numFeatures,'Name','input')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','sm')
    classificationLayer('Name','classification')]

layers = 
  4x1 Layer array with layers:

     1   'input'            Feature Input           21 features
     2   'fc'               Fully Connected         3 fully connected layer
     3   'sm'               Softmax                 softmax
     4   'classification'   Classification Output   crossentropyex

To train a network containing both an image input layer and a feature input layer, you must use a dlnetwork object in a custom training loop.

Define the size of the input image, the number of features of each observation, the number of classes, and the size and number of filters of the convolution layer.

imageInputSize = [28 28 1];
numFeatures = 1;
numClasses = 10;

filterSize = 5;
numFilters = 16;

To create a network with two input layers, you must define the network in two parts and join them, for example, by using a concatenation layer.

Define the first part of the network. Define the image classification layers and include a concatenation layer before the last fully connected layer.

layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(filterSize,numFilters,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(50,'Name','fc1')
    concatenationLayer(1,2,'Name','concat')
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')];

Convert the layers to a layer graph.

lgraph = layerGraph(layers);

For the second part of the network, add a feature input layer and connect it to the second input of the concatenation layer.

featInput = featureInputLayer(numFeatures,'Name','features');
lgraph = addLayers(lgraph, featInput);
lgraph = connectLayers(lgraph,'features','concat/in2');

Visualize the network.

plot(lgraph)

Figure contains an axes object. The axes object contains an object of type graphplot.

Create a dlnetwork object.

dlnet = dlnetwork(lgraph)

dlnet = 
  dlnetwork with properties:

         Layers: [8x1 nnet.cnn.layer.Layer]
    Connections: [7x2 table]
     Learnables: [6x3 table]
          State: [0x3 table]
     InputNames: {'images'  'features'}
    OutputNames: {'softmax'}
    Initialized: 1
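As a sketch of running data through the assembled network (the random inputs below are placeholders), you can call predict with one formatted dlarray per input, in the order given by InputNames:

XImg  = dlarray(rand(imageInputSize),'SSCB');  % placeholder image batch (spatial, spatial, channel, batch)
XFeat = dlarray(rand(numFeatures,1),'CB');     % placeholder feature batch (channel, batch)
Y = predict(dlnet,XImg,XFeat);                 % softmax scores, one column per observation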


Version History

Introduced in R2020b