Deep Learning HDL Toolbox


Prototype and deploy deep learning networks on FPGAs and SoCs

Get Started:

Deep Learning Inferencing on FPGAs

Prototype and implement deep learning networks on FPGAs for edge deployment.

Programmable Deep Learning Processor

The toolbox includes a deep learning processor that features generic convolution and fully connected layers controlled by scheduling logic. This deep learning processor performs FPGA-based inferencing of networks developed using Deep Learning Toolbox™. High-bandwidth memory interfaces speed up the transfer of layer and weight data.

The deep learning processor contains generic convolution and fully-connected processing modules that are programmed to execute the specified network.

Deep learning processor architecture.

Compile and Deploy

Compile your deep learning network into a set of instructions to be run by the deep learning processor. Deploy to the FPGA and run prediction while capturing actual on-device performance metrics.

Compile your deep learning network into a set of instructions to deploy to the deep learning processor.

Compiling and deploying a YOLO v2 network.

FPGA-Based Inferencing in MATLAB

Perform deep learning inferencing on FPGAs from MATLAB.

Creating a Network for Deployment

Begin by using Deep Learning Toolbox to design, train, and analyze your deep learning network for tasks such as object detection or classification. You can also start by importing a trained network or layers from other frameworks.

Deploying Your Network to the FPGA

Once you have a trained network, use the deploy command to program the FPGA with the deep learning processor over an Ethernet or JTAG interface. Then use the compile command to generate a set of instructions for your trained network without reprogramming the FPGA.


Use MATLAB to configure the board and interface, compile the network, and deploy to the FPGA.
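As a sketch of this workflow, the steps above map onto the dlhdl.Target and dlhdl.Workflow objects; the network variable (net), vendor, and bitstream name below are illustrative placeholders:

```matlab
% Connect to the board over Ethernet (JTAG is the other common choice).
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');

% Associate a trained network with a deep learning processor bitstream.
% 'zcu102_single' is a placeholder bitstream name for this sketch.
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'zcu102_single', ...
                    'Target', hTarget);

hW.compile;   % generate the instruction set for the trained network
hW.deploy;    % program the FPGA with the deep learning processor
```

After deployment, a different trained network can be compiled and loaded without rebuilding or reprogramming the processor bitstream itself.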

Run FPGA-Based Inferencing as Part of a MATLAB Application

Run your entire application in MATLAB®, including your test bench, preprocessing and postprocessing algorithms, and the FPGA-based deep learning inferencing. A single MATLAB command, predict, performs the inferencing on the FPGA and returns results to the MATLAB workspace.

A MATLAB loop that captures an image, preprocesses it by resizing for AlexNet, runs deep learning inferencing on the FPGA, and then postprocesses and displays the results.

Run MATLAB applications that perform deep learning inferencing on the FPGA.
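A minimal sketch of such an application loop, assuming a webcam is attached and a dlhdl.Workflow object (hW) has already been deployed with an AlexNet-based network:

```matlab
cam = webcam;   % capture device (assumed available in this sketch)

for k = 1:100
    % Capture and preprocess: AlexNet expects 227-by-227 RGB input.
    img = snapshot(cam);
    img = imresize(img, [227 227]);

    % Run inferencing on the FPGA; results return to the MATLAB workspace.
    score = hW.predict(single(img));

    % Postprocess: pick the highest-scoring class and display it.
    [~, idx] = max(score);
    % ... map idx to a class label and annotate/display the image here
end
```

Everything outside the predict call, from capture to display, runs as ordinary MATLAB code, so the FPGA handles only the network inferencing.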

Network Customization

Tune your deep learning network to meet application-specific requirements on your target FPGA or SoC device.

Profile FPGA Inferencing

Measure layer-level latency as you run predictions on the FPGA to find performance bottlenecks.

Deep learning inference profiling metrics.

Profile deep learning network inference on an FPGA from MATLAB.
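As a sketch, layer-level profiling can be requested directly from the predict call on an existing dlhdl.Workflow object (hW) via the 'Profile' option:

```matlab
% Run a prediction while capturing on-device performance metrics.
% The second output holds the measured speed/latency data.
[prediction, speed] = hW.predict(single(img), 'Profile', 'on');
```

The reported per-layer latencies point to the layers worth redesigning or resizing.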

Tune the Network Design

Use the profile metrics to tune your network configuration with Deep Learning Toolbox. For example, use Deep Network Designer to add layers, remove layers, or create new connections.

Deploy Custom RTL Implementations

Deploy custom RTL implementations of the deep learning processor to any FPGA, ASIC, or SoC device with HDL Coder™.

Customize the Deep Learning Processor Configuration

Specify hardware architecture options for implementing the deep learning processor, such as the number of parallel threads or the maximum layer size.
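A sketch of such customization using the dlhdl.ProcessorConfig object; the specific thread counts below are illustrative values, not recommendations:

```matlab
% Create a default processor configuration and adjust module properties.
hPC = dlhdl.ProcessorConfig;

% Example architecture options: number of parallel threads per module.
hPC.setModuleProperty('conv', 'ConvThreadNumber', 16);
hPC.setModuleProperty('fc', 'FCThreadNumber', 8);
```

Larger thread counts trade FPGA resource usage for throughput, so the right values depend on the target device and network.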

Generate Synthesizable RTL

Use HDL Coder to generate synthesizable RTL from the deep learning processor for use in a variety of implementation workflows and devices. Reuse the same deep learning processor for prototype and production deployment.

The dlhdl.buildProcessor function generates synthesizable RTL from the custom deep learning processor.

Generate synthesizable RTL from the deep learning processor.
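As a sketch, RTL generation takes a processor configuration and hands it to HDL Coder via dlhdl.buildProcessor:

```matlab
% Start from a (possibly customized) processor configuration.
hPC = dlhdl.ProcessorConfig;

% Generate synthesizable RTL for the deep learning processor.
% HDL Coder is required for this step.
dlhdl.buildProcessor(hPC);
```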

Generate IP Cores for Integration

When HDL Coder generates RTL from the deep learning processor, it also generates an IP core with standard AXI interfaces for integration into your SoC reference design.

HDL Coder generates an IP core that maps the deep learning processor inputs and outputs to AXI interfaces.

Target platform interface table showing the mapping between I/O and AXI interfaces.