
Setting Up the Prerequisite Products

To use GPU Coder™ for CUDA® code generation, install the products specified in Installing Prerequisite Products.

MEX Setup

When generating CUDA MEX with GPU Coder, the code generator uses the NVIDIA® compiler and libraries included with MATLAB®. Depending on the operating system of your development computer, you only need to set up the MEX code generator.
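For reference, a minimal sketch of generating CUDA MEX with GPU Coder is shown below. The entry-point function myFcn and its input size are hypothetical placeholders; substitute your own design file and example inputs.

cfg = coder.gpuConfig('mex');                              % MEX build configuration for GPU Coder
codegen -config cfg -args {zeros(256,256,'single')} myFcn  % generate CUDA MEX for the entry point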

Windows Systems

If you have multiple versions of Microsoft® Visual Studio® compilers for the C/C++ language installed on your Windows® system, MATLAB selects one as the default compiler. If the selected compiler is not compatible with the version supported by GPU Coder, change the selection. For supported Microsoft Visual Studio versions, see Installing Prerequisite Products.

To change the default compiler, use the mex -setup C++ command. When you call mex -setup C++, MATLAB displays a message with links to set up a different compiler. Select a link and change the default compiler for building MEX files. The compiler that you choose remains the default until you call mex -setup C++ again to select a different default. For more information, see Change Default Compiler. The mex -setup C++ command changes only the C++ language compiler. You must also change the default compiler for C by using mex -setup C.
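The two commands described above, run from the MATLAB Command Window:

mex -setup C++   % select the default C++ compiler for building MEX files
mex -setup C     % select the default C compiler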

Linux Platform

MATLAB and the CUDA Toolkit support only the GCC/G++ compilers for the C/C++ language on Linux® platforms. For supported GCC/G++ versions, see Installing Prerequisite Products.
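As a quick sanity check, you can query the GCC/G++ versions installed on the Linux development computer from MATLAB and compare them against the versions listed in Installing Prerequisite Products. This is only a sketch; it assumes gcc and g++ are on the system path.

system('gcc --version')   % report the installed GCC version
system('g++ --version')   % report the installed G++ version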

Environment Variables

Standalone code (static library, dynamically linked library, or executable program) generation has additional set up requirements. GPU Coder uses environment variables to locate the necessary tools, compilers, and libraries required for code generation.
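As an illustration, a minimal sketch of configuring standalone CUDA code generation is shown below. The entry-point function myFcn and the example input are hypothetical placeholders; the build type can also be 'dll' or 'exe' for the other standalone targets.

cfg = coder.gpuConfig('lib');                                  % static library build configuration
codegen -config cfg -args {zeros(224,224,3,'single')} myFcn    % generate standalone CUDA code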

Note

On Windows, a space or special character in the path to the tools, compilers, and libraries can create issues during the build process. You must either install third-party software in locations that do not contain spaces or change the Windows settings to enable creation of short names for files, folders, and paths. For more information, see Using Windows short names solution in MATLAB Answers.

Platform Variable Name Description
Windows CUDA_PATH

Path to the CUDA Toolkit installation.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\

NVIDIA_CUDNN

Path to the root folder of cuDNN installation. The root folder contains the bin, include, and lib subfolders.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\

NVIDIA_TENSORRT

Path to the root folder of TensorRT installation. The root folder contains the bin, data, include, and lib subfolders.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\TensorRT\

OPENCV_DIR

Path to the OpenCV build folder on the host. This variable is required for building and running deep learning examples.

For example:

C:\Program Files\opencv\build

PATH

Path to the CUDA executables. Generally, the CUDA Toolkit installer sets this value automatically.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin

Path to the cudnn.dll dynamic library. The name of this library might be different on your installation.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin

Path to the nvinfer* dynamic libraries of TensorRT. The names of these libraries might be different on your installation.

For example:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\TensorRT\lib

Path to the nsys executable of NVIDIA Nsight™ Systems.

For example:

C:\Program Files\NVIDIA Corporation\Nsight Systems 2021.1.1\target-windows-x64

Path to the Dynamic-link libraries (DLL) of OpenCV. This variable is required for running deep learning examples.

For example:

C:\Program Files\opencv\build\x64\vc15\bin

Linux PATH

Path to the CUDA Toolkit executable.

For example:

/usr/local/cuda-11.2/bin

Path to the nsys executable of NVIDIA Nsight Systems.

For example:

/usr/local/Nsight Systems 2021.1.1/target-linux-x64

Path to the OpenCV libraries. This variable is required for building and running deep learning examples.

For example:

/usr/local/lib/

Path to the OpenCV header files. This variable is required for building deep learning examples.

For example:

/usr/local/include/opencv

LD_LIBRARY_PATH

Path to the CUDA library folder.

For example:

/usr/local/cuda-11.2/lib64

Path to the cuDNN library folder.

For example:

/usr/local/cuda-11.2/lib64/

Path to the TensorRT™ library folder.

For example:

/usr/local/cuda-11.2/TensorRT/lib/

Path to the ARM® Compute Library folder on the target hardware.

For example:

/usr/local/arm_compute/lib/

Set LD_LIBRARY_PATH on the ARM target hardware.

NVIDIA_CUDNN

Path to the root folder of cuDNN library installation.

For example:

/usr/local/cuda-11.2/

NVIDIA_TENSORRT

Path to the root folder of the TensorRT library installation.

For example:

/usr/local/cuda-11.2/TensorRT/

ARM_COMPUTELIB

Path to the root folder of the ARM Compute Library installation on the ARM target hardware. Set this value on the ARM target hardware.

For example:

/usr/local/arm_compute
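For example, you can set these variables for the current MATLAB session with setenv and confirm them with getenv. This is only a sketch that reuses the Windows example paths from the table above; adjust the paths to match your installation. On Linux, you would typically export PATH and LD_LIBRARY_PATH in the shell before starting MATLAB.

setenv('CUDA_PATH','C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\');
setenv('NVIDIA_CUDNN','C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\');
setenv('NVIDIA_TENSORRT','C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\TensorRT\');
setenv('OPENCV_DIR','C:\Program Files\opencv\build');
setenv('PATH',[getenv('PATH') ';C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin']);
getenv('NVIDIA_CUDNN')   % verify that the value is set for this session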

Verify Setup

To verify that your development computer has all the tools and configuration needed for GPU code generation, use the coder.checkGpuInstall function. This function performs checks to verify that your environment has all the third-party tools and libraries required for GPU code generation. You must pass a coder.gpuEnvConfig object to the function. The function verifies the GPU code generation environment based on the properties specified in the given configuration object.

You can also use the equivalent GUI-based application, which performs the same checks and can be launched with the command Check GPU Install.

In the MATLAB Command Window, enter:

gpuEnvObj = coder.gpuEnvConfig;
gpuEnvObj.BasicCodegen = 1;
gpuEnvObj.BasicCodeexec = 1;
gpuEnvObj.DeepLibTarget = 'tensorrt';
gpuEnvObj.DeepCodeexec = 1;
gpuEnvObj.DeepCodegen = 1;
results = coder.checkGpuInstall(gpuEnvObj)

The output shown here is representative. Your results might differ.

Compatible GPU           : PASSED
CUDA Environment         : PASSED
	Runtime   : PASSED
	cuFFT     : PASSED
	cuSOLVER  : PASSED
	cuBLAS    : PASSED
cuDNN Environment        : PASSED
TensorRT Environment     : PASSED
Basic Code Generation    : PASSED
Basic Code Execution     : PASSED
Deep Learning (TensorRT) Code Generation: PASSED
Deep Learning (TensorRT) Code Execution: PASSED

results =

  struct with fields:

                 gpu: 1
                cuda: 1
               cudnn: 1
            tensorrt: 1
        basiccodegen: 1
       basiccodeexec: 1
         deepcodegen: 1
        deepcodeexec: 1
    tensorrtdatatype: 1
           profiling: 0

See Also

Apps

Functions

Objects

Related Topics