
Deep Learning Custom Training Loops

Customize deep learning training loops and loss functions

If the trainingOptions function does not provide the training options that you need for your task, or custom output layers do not support the loss functions that you need, then you can define a custom training loop. For networks that cannot be created using layer graphs, you can define custom networks as a function. To learn more, see Define Custom Training Loops, Loss Functions, and Networks.
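As an illustration, here is a minimal sketch of a custom training loop built from the functions listed below. The network architecture, the choice of the Adam optimizer, and the variables X (4-by-N single-precision features) and T (3-by-N one-hot targets) are assumptions made for this example, not part of any particular workflow.

    % Define a small classification network and convert it to a dlnetwork.
    layers = [
        featureInputLayer(4)
        fullyConnectedLayer(16)
        reluLayer
        fullyConnectedLayer(3)
        softmaxLayer];
    net = dlnetwork(layers);

    averageGrad = [];      % Adam optimizer state
    averageSqGrad = [];
    numEpochs = 5;
    iteration = 0;

    for epoch = 1:numEpochs
        iteration = iteration + 1;

        % Format the data as dlarray objects with channel (C) and batch (B) labels.
        % X and T are assumed to exist in the workspace for this sketch.
        dlX = dlarray(single(X),"CB");
        dlT = dlarray(single(T),"CB");

        % Evaluate the model loss and gradients using automatic differentiation.
        [loss,gradients] = dlfeval(@modelLoss,net,dlX,dlT);

        % Update the learnable parameters with the Adam optimizer.
        [net,averageGrad,averageSqGrad] = adamupdate(net,gradients, ...
            averageGrad,averageSqGrad,iteration);
    end

    function [loss,gradients] = modelLoss(net,X,T)
        Y = forward(net,X);                           % training-mode forward pass
        loss = crossentropy(Y,T);                     % classification loss
        gradients = dlgradient(loss,net.Learnables);  % gradients w.r.t. learnables
    end

In practice, you typically loop over mini-batches created with minibatchqueue and monitor training progress; this sketch omits both for brevity.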

Functions


dlnetwork Deep learning network for custom training loops
forward Compute deep learning network output for training
predict Compute deep learning network output for inference
adamupdate Update parameters using adaptive moment estimation (Adam)
rmspropupdate Update parameters using root mean squared propagation (RMSProp)
sgdmupdate Update parameters using stochastic gradient descent with momentum (SGDM)
dlupdate Update parameters using custom function
minibatchqueue Create mini-batches for deep learning
onehotencode Encode data labels into one-hot vectors
onehotdecode Decode probability vectors into class labels
padsequences Pad or truncate sequence data to same length
initialize Initialize learnable and state parameters of a dlnetwork
resetState Reset state parameters of neural network
dlarray Deep learning array for custom training loops
dlgradient Compute gradients for custom training loops using automatic differentiation
dlfeval Evaluate deep learning model for custom training loops
dims Dimension labels of dlarray
finddim Find dimensions with specified label
stripdims Remove dlarray data format
extractdata Extract data from dlarray
isdlarray Check if object is dlarray
functionToLayerGraph Convert deep learning model function to a layer graph
dlconv Deep learning convolution
dltranspconv Deep learning transposed convolution
lstm Long short-term memory
gru Gated recurrent unit
embed Embed discrete data
fullyconnect Sum all weighted input data and apply a bias
dlode45 Deep learning solution of nonstiff ordinary differential equation (ODE)
relu Apply rectified linear unit activation
leakyrelu Apply leaky rectified linear unit activation
batchnorm Normalize data across all observations for each channel independently
crosschannelnorm Cross channel square-normalize using local responses
groupnorm Normalize data across grouped subsets of channels for each observation independently
instancenorm Normalize across each channel for each observation independently
layernorm Normalize data across all channels for each observation independently
avgpool Pool data to average values over spatial dimensions
maxpool Pool data to maximum value
maxunpool Unpool the output of a maximum pooling operation
softmax Apply softmax activation to channel dimension
sigmoid Apply sigmoid activation
crossentropy Cross-entropy loss for classification tasks
l1loss L1 loss for regression tasks
l2loss L2 loss for regression tasks
huber Huber loss for regression tasks
mse Half mean squared error
ctc Connectionist temporal classification (CTC) loss for unaligned sequence classification
dlaccelerate Accelerate deep learning function for custom training loops
AcceleratedFunction Accelerated deep learning function
clearCache Clear accelerated deep learning function trace cache
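
As a short illustration of the automatic differentiation functions listed above: dlgradient must be called inside a function that is evaluated by dlfeval. The helper gradFun below is hypothetical, written only for this sketch; it computes the gradient of y = sum(x.^2) with respect to x.

    % Gradient of sum(x.^2) at x = [1 2 3]; expected result is [2 4 6].
    x = dlarray([1 2 3]);
    grad = dlfeval(@gradFun,x);

    % gradFun is an illustrative helper, not a toolbox function.
    function grad = gradFun(x)
        y = sum(x.^2,"all");        % scalar-valued function of x
        grad = dlgradient(y,x);     % dy/dx via reverse-mode automatic differentiation
    end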

Topics

Custom Training Loops

Model Functions

Automatic Differentiation

Deep Learning Function Acceleration