Speed up your deep learning applications by training neural networks in the MATLAB® Deep Learning Container, designed to take full advantage of high-performance NVIDIA® GPUs. You can access the MATLAB Deep Learning Container remotely using a web browser or via a VNC connection.
The MATLAB Deep Learning Container contains MATLAB and a range of MATLAB toolboxes that are ideal for deep learning (see Additional Information).
This guide helps you run the MATLAB desktop in the cloud on an Amazon EC2® GPU-enabled instance. For other cloud service vendors, the required steps are different. The MATLAB Deep Learning Container, a Docker container hosted on NVIDIA GPU Cloud, simplifies the process. The container is available at the NVIDIA GPU Cloud Container Registry.
To use the MATLAB Deep Learning Container, you need:
An Amazon® Web Services account.
A MATLAB license that meets the following conditions:
Valid for all the MathWorks® products installed in the container. You can obtain a trial license for products in the MATLAB Deep Learning Container at MATLAB Trial for Deep Learning on the Cloud.
Current on Software Maintenance Service (SMS).
Linked to a MathWorks Account.
Configured for cloud use. Individual and Campus-Wide licenses are already configured. For other license types, contact your license administrator. You can identify your license type and administrator by viewing your MathWorks Account. Administrators can consult Administer Network Licenses.
If you have a Concurrent license type, you must supply the port number and DNS address of the network license manager when you run the container. Add an option of the following form to the docker run command when you start the container:
-e MLM_LICENSE_FILE=27000@MyLicenseServer
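For example, combined with the run options described later in this guide, the full command might look like the following sketch. The license server address and release tag are placeholders; substitute your own values. The command is echoed for review rather than executed:

```shell
# Placeholder values: replace with your license manager's port@host
# and the MATLAB release tag you pulled.
LICENSE_SERVER="27000@MyLicenseServer"
RELEASE="r2020a"

# -e passes MLM_LICENSE_FILE into the container's environment so
# MATLAB can reach the network license manager.
CMD="docker run --rm -p 6080:6080 -p 5901:5901 --gpus all \
--shm-size=512M -e MLM_LICENSE_FILE=${LICENSE_SERVER} \
nvcr.io/partners/matlab:${RELEASE}"

# Print the command for review; paste it into your SSH session to run it.
echo "$CMD"
```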
You are responsible for the cost of the Amazon Web Services used when you create a cluster using this guide. Resource settings, such as instance type, affect the cost of deployment. For cost estimates, see the pricing pages for each AWS service you are using. Prices are subject to change.
If you do not have an Amazon Web Services account, create one at https://aws.amazon.com by following the on-screen instructions. Create a key pair using the Amazon EC2 Console.
Note
Make sure that you download the private key when you create a key pair, as it is the only way to connect to the instance as an administrator.
Log in to your Amazon Web Services Console. From the Services menu, select EC2. Click the Launch Instance button.
On the Choose AMI page, navigate to the AWS Marketplace and search for the NVIDIA Deep Learning AMI. This Amazon Machine Image (AMI) is designed for use with NVIDIA GPU Cloud to take advantage of the Volta GPUs available in P3 instances.
Note that not all Availability Zones offer P3 instances. Your Availability Zone is defined during setup of your virtual private cloud (VPC).
On the Configure Instance, Add Storage, and Add Tags pages, configure your instance as needed.
If necessary, choose or create appropriate Security Groups for your instance on the Configure Security Group page.
When correctly configured, select the appropriate key pair option and start your instance. Make sure that you have access to your private key so you can log in to your instance.
Pulling the container downloads the container image onto the Docker host instance, the machine on which the container is to be run. You have to pull the container only once per EC2 instance.
You can copy the pull command for the container image release from the NVIDIA GPU Cloud Container Registry. In the Tags section, locate the container image release that you want to run. In the Pull column, click the icon to copy the docker pull command. The command is of the form:
docker pull nvcr.io/partners/matlab:r20XYz
Replace r20XYz with the specific MATLAB release name, for example r2020a. Ensure that the last part of the pull command matches the MATLAB release you want to use.
Connect to your instance via SSH from your client machine with your private key, using PuTTY or another SSH client. The default username is ubuntu, and the address has the form:
ubuntu@ec2-public-ipv4-address.region.amazonaws.com
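A minimal SSH command for this connection might look like the sketch below; the key file name and public DNS name are placeholders for your own values:

```shell
# Placeholder values: use the private key you downloaded and your
# instance's public DNS name from the EC2 console.
KEY_FILE="MyKeyPair.pem"
INSTANCE="ec2-public-ipv4-address.region.amazonaws.com"

# The NVIDIA Deep Learning AMI is Ubuntu-based, so log in as "ubuntu".
# Echoed for review; remove the echo wrapper to run it.
CMD="ssh -i ${KEY_FILE} ubuntu@${INSTANCE}"
echo "$CMD"
```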
Paste the docker pull command into your SSH client, and run the command on your EC2 instance. You do not need to log in to the NVIDIA Container Registry to pull the container image.
Running the docker pull command downloads the MATLAB container image onto the host EC2 machine. It might take some time to download and extract the large container image.
Run the MATLAB Deep Learning Container using a command of the form:
docker run --rm -p 6080:6080 -p 5901:5901 --gpus all --shm-size=512M nvcr.io/partners/matlab:r20XYz
Ensure that the last part of the run command matches the MATLAB release you want to use.
The option -p hostport:containerport maps ports from inside the container to ports on the Docker host so that you can connect to the container desktop. The ports used in the container are 5901 (for VNC connection) and 6080 (for web browser connection). If you are deploying multiple containers on the same host instance, you must increment the host ports until you find a free port. For example:
-p 5902:5901 -p 6081:6080
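Putting this together, a second container on the same host could be started with the incremented host ports as in this sketch (the release tag is an example; the container-side ports stay the same):

```shell
# Example: host ports 5902/6081 for a second container, while the
# first container keeps 5901/6080.
RELEASE="r2020a"
CMD="docker run --rm -p 5902:5901 -p 6081:6080 --gpus all \
--shm-size=512M nvcr.io/partners/matlab:${RELEASE}"

# Echoed for review; paste it into your SSH session to run it.
echo "$CMD"
```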
The MATLAB Deep Learning Container is now running on your EC2 machine.
There are three ways to access MATLAB in the container:
Use a web browser to connect to the container desktop and run MATLAB desktop
Use VNC to connect to the container desktop and run MATLAB desktop
Run MATLAB using the command-line interface
To connect using a web browser, first set up a tunnel to the container port 6080 (default noVNC port). For more information on how to set up an SSH tunnel, see Create Encrypted Connection to Remote Applications and Containers. Then, use a URL like the one below to connect to the appropriate port:
http://localhost:6080
Note that you must use localhost and not the name of the host instance.
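A typical tunnel for the web connection forwards local port 6080 to the same port on the EC2 host, where the container's noVNC port is published. The key file and hostname below are placeholders for your own values:

```shell
# Placeholder key file and host; substitute your own values.
KEY_FILE="MyKeyPair.pem"
INSTANCE="ec2-public-ipv4-address.region.amazonaws.com"

# -L forwards local port 6080 to port 6080 on the EC2 host. Keep this
# session open while you browse to http://localhost:6080.
CMD="ssh -i ${KEY_FILE} -L 6080:localhost:6080 ubuntu@${INSTANCE}"
echo "$CMD"
```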
You will see a login screen for noVNC. Click Connect. When you are prompted for a password to access the desktop, use the password:
matlab
You can run MATLAB using the desktop icon. Log in using your MathWorks Account.
If you cannot log in using your MathWorks Account, check that your account is connected to a license that is configured for cloud use. To check, visit License Center.
To connect via VNC, first set up a tunnel to the container port 5901 (default VNC port). For more information on how to set up an SSH tunnel, see Create Encrypted Connection to Remote Applications and Containers. Then, use your VNC client to connect to the appropriate display port on the client:
localhost:1
Note that you must use localhost and not the name of the host instance.
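For VNC, the tunnel forwards local port 5901 instead. Your VNC client then connects to localhost:1, since display 1 corresponds to TCP port 5901. As before, the key file and hostname are placeholders:

```shell
# Placeholder key file and host; substitute your own values.
KEY_FILE="MyKeyPair.pem"
INSTANCE="ec2-public-ipv4-address.region.amazonaws.com"

# Forward local port 5901 to the container's published VNC port.
CMD="ssh -i ${KEY_FILE} -L 5901:localhost:5901 ubuntu@${INSTANCE}"
echo "$CMD"
```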
To log in and connect to the container desktop, use the password:
matlab
You can run MATLAB using the desktop icon. Log in using your MathWorks Account.
If you cannot log in using your MathWorks Account, check that your account is connected to a license that is configured for cloud use. To check, visit License Center.
You can also run MATLAB from the terminal using the command-line interface. To do so, enter the command:
matlab
Note that there is no graphical desktop in this case.
If you cannot log in using your MathWorks Account, check that your account is connected to a license that is configured for cloud use. To check, visit License Center.
MATLAB supports training a single network in parallel using multiple GPUs. To enable multi-GPU training in the MATLAB Deep Learning Container, use the trainingOptions function to set 'ExecutionEnvironment' to 'multi-gpu'.
Train your network using the trainNetwork function. MATLAB opens a parallel pool of workers on all available GPUs. To select only specific GPUs for training, you can use gpuDevice. For further information, see Select Particular GPUs to Use for Training (Deep Learning Toolbox).
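As a sketch, the two steps above might look like this in MATLAB, where imdsTrain and layers are placeholders for your own training data and network architecture:

```matlab
% Sketch only: imdsTrain and layers are placeholders for your own
% image datastore and network architecture.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','multi-gpu', ...  % train on all available GPUs
    'MaxEpochs',4);
net = trainNetwork(imdsTrain,layers,options);
```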
To test your container, you can run the Create Simple Deep Learning Network for Classification (Deep Learning Toolbox) example. To try this example, double-click the file MNISTExample.mlx in the Current Folder pane in the MATLAB startup folder. To run this example on all available GPUs, in the trainingOptions function, set 'ExecutionEnvironment' to 'multi-gpu'.
To close the container session, type exit from the container terminal. The container is stopped and removed. No processes or data are saved by default when the container is closed, unless you have saved data in the cloud by mounting cloud storage, as described in Share Data with Containers.
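For example, persisting your work by mounting a host directory into the container might look like the following sketch. The paths and release tag are examples only; see Share Data with Containers for the supported options:

```shell
# Example paths and tag: mount the host's /home/ubuntu/data directory
# at /data inside the container so files written there survive the
# container session.
RELEASE="r2020a"
CMD="docker run --rm -p 6080:6080 -p 5901:5901 --gpus all \
--shm-size=512M -v /home/ubuntu/data:/data \
nvcr.io/partners/matlab:${RELEASE}"
echo "$CMD"
```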
You can configure and customize the behavior of a MathWorks container by setting specific environment variables. For more information, see Configure Containers.
NVIDIA GPU Cloud is a Docker repository of containers that are designed to run applications on high-performance NVIDIA GPUs.
The MATLAB Deep Learning Container contains MATLAB and several other toolboxes that are useful in deep learning applications.
Computer Vision Toolbox™
GPU Coder™
Image Processing Toolbox™
MATLAB Coder™
Deep Learning Toolbox™
Parallel Computing Toolbox™
Signal Processing Toolbox™
Statistics and Machine Learning Toolbox™
Text Analytics Toolbox™
To perform deep learning using GPUs in the MATLAB Deep Learning Container, you must have a license valid for MATLAB, Deep Learning Toolbox, and Parallel Computing Toolbox. A license valid for the other products in the container is required to access the full functionality of the container.
If you do not have a license valid for Deep Learning Toolbox or Parallel Computing Toolbox, MATLAB displays a warning on startup indicating that you cannot use these products.
If you do not have a license valid for other products in the MATLAB Deep Learning Container, MATLAB displays a message on startup indicating that you cannot use these products.
You can obtain a trial license for products in the MATLAB Deep Learning Container at MATLAB Trial for Deep Learning on the Cloud. In addition, the container contains several Pretrained Deep Neural Networks (Deep Learning Toolbox).
You can import networks and network architectures into the container from TensorFlow™-Keras and Caffe, with or without layer weights. You can also convert trained networks to the Open Neural Network Exchange (ONNX) model format.
Import from Keras (Deep Learning Toolbox)
Import from Caffe (Deep Learning Toolbox)
The MATLAB Deep Learning Container also contains:
By deploying this software in a container, you can avoid the set-up time needed to install and configure these products. You can run multiple containers to train several networks at once or in different locations with reproducible results.