
trainSoftmaxLayer

Train a softmax layer for classification

Description


net = trainSoftmaxLayer(X,T) trains a softmax layer, net, on the input data X and the targets T.

net = trainSoftmaxLayer(X,T,Name,Value) trains a softmax layer, net, with additional options specified by one or more Name,Value pair arguments.

For example, you can specify the loss function.
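A minimal sketch of both calling forms, using the iris_dataset sample data from the example below; the option values shown are illustrative rather than defaults:

% Load sample predictors X (4-by-150) and one-hot targets T (3-by-150).
[X,T] = iris_dataset;

% Simplest form: train a softmax layer with default options.
net = trainSoftmaxLayer(X,T);

% With name-value pairs: for example, specify the loss function and
% suppress the training window (illustrative values).
net = trainSoftmaxLayer(X,T,'LossFunction','crossentropy', ...
    'ShowProgressWindow',false);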

Examples


Load the sample data.

[X,T] = iris_dataset;

X is a 4-by-150 matrix of four attributes of iris flowers: sepal length, sepal width, petal length, and petal width.

T is a 3-by-150 matrix of associated class vectors defining which of the three classes each input is assigned to. Each row corresponds to a dummy variable representing one of the iris species (classes). In each column, a 1 in one of the three rows represents the class that particular sample (observation or example) belongs to, and the entries in the other rows are 0 for the classes it does not belong to.
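If class labels are available as indices rather than dummy variables, one way to construct such a one-hot target matrix is with ind2vec; this is a small sketch, and the labels variable is hypothetical:

% Hypothetical class indices (values 1 to 3) for six samples.
labels = [1 2 3 3 1 2];

% ind2vec returns a sparse 3-by-6 matrix with a single 1 per column;
% full() converts it to a dense dummy-variable matrix like T.
Tsmall = full(ind2vec(labels));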

Train a softmax layer using the sample data.

net = trainSoftmaxLayer(X,T);

Classify the observations into one of the three classes using the trained softmax layer.

Y = net(X);

Plot the confusion matrix using the targets and the classifications obtained from the softmax layer.

plotconfusion(T,Y);
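As an optional follow-up, a small sketch (assuming vec2ind from the same toolbox) converts the one-hot targets and the softmax outputs to class indices and reports a scalar accuracy:

% Convert one-hot columns to class indices (the row of the maximum entry).
trueClass = vec2ind(T);
predClass = vec2ind(Y);

% Fraction of observations assigned to the correct class.
accuracy = mean(trueClass == predClass);
fprintf('Classification accuracy: %.2f%%\n', 100*accuracy);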

Input Arguments


Training data, specified as an m-by-n matrix, where m is the number of variables in training data and n is the number of observations (examples). Hence, each column of X represents a sample.

Data Types: single | double

Target data, specified as a k-by-n matrix, where k is the number of classes and n is the number of observations. Each row is a dummy variable representing a particular class. In other words, each column represents a sample, and all entries of a column are zero except for a single one. This single entry indicates the class for that sample.

Data Types: single | double

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'MaxEpochs',400,'ShowProgressWindow',false specifies the maximum number of iterations as 400 and hides the training window.

Maximum number of training iterations, specified as the comma-separated pair consisting of 'MaxEpochs' and a positive integer value.

Example: 'MaxEpochs',500

Data Types: single | double

Loss function for the softmax layer, specified as the comma-separated pair consisting of 'LossFunction' and either 'crossentropy' or 'mse'.

mse stands for the mean squared error function, which is given by:

$$E = \frac{1}{n}\sum_{j=1}^{n}\sum_{i=1}^{k}\left(t_{ij} - y_{ij}\right)^{2},$$

where n is the number of training examples and k is the number of classes. t_{ij} is the ijth entry of the target matrix T, and y_{ij} is the ith output of the softmax layer when the input vector is x_j.

The cross entropy function is given by:

$$E = -\frac{1}{n}\sum_{j=1}^{n}\sum_{i=1}^{k}\left[t_{ij}\ln y_{ij} + \left(1 - t_{ij}\right)\ln\left(1 - y_{ij}\right)\right].$$

Example: 'LossFunction','mse'
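To make the two formulas concrete, the following sketch computes both losses directly from a k-by-n target matrix T and output matrix Y; it is for illustration only and is not the toolbox's internal implementation:

% Number of observations (columns).
n = size(T,2);

% Mean squared error, following the mse formula above.
mseLoss = sum(sum((T - Y).^2)) / n;

% Cross entropy, following the formula above; a small epsilon guards log(0).
epsVal = 1e-12;
Yc = min(max(Y,epsVal), 1 - epsVal);
ceLoss = -sum(sum(T.*log(Yc) + (1 - T).*log(1 - Yc))) / n;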

Indicator to display the training window during training, specified as the comma-separated pair consisting of 'ShowProgressWindow' and either true or false.

Example: 'ShowProgressWindow',false

Data Types: logical

Training algorithm used to train the softmax layer, specified as the comma-separated pair consisting of 'TrainingAlgorithm' and 'trainscg', which stands for scaled conjugate gradient.

Example: 'TrainingAlgorithm','trainscg'

Output Arguments


Softmax layer for classification, returned as a network object. The softmax layer, net, is the same size as the target T.
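A brief sketch of inspecting the returned object; view and the net(X) call are standard network-object operations, and the size check reflects the statement above:

% Train the layer and open a diagram of the resulting network object.
net = trainSoftmaxLayer(X,T);
view(net);

% The layer's output has the same dimensions as the target matrix T.
Y = net(X);
isequal(size(Y), size(T))   % expected to return true (logical 1)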

Version History

Introduced in R2015b