Xilinx

FPGA Design

Developing AI Inference Solutions with the Vitis AI Platform

Course Description

This course describes how to use the Vitis™ AI development platform in conjunction with DNN algorithms, models, inference and training, and frameworks on cloud and edge computing platforms.

The emphasis of this course is on:

  • Illustrating the Vitis AI tool flow

  • Utilizing the architectural features of the Deep Learning Processor Unit (DPU)

  • Optimizing a model using the AI quantizer and AI compiler

  • Utilizing the Vitis AI Library to optimize pre-processing and post-processing functions

  • Creating a custom platform and application

  • Deploying a design

Who Should Attend

Software and hardware developers, AI/ML engineers, data scientists, and anyone who needs to accelerate their software applications using Xilinx devices

Duration

2 Days

Software Tools

  • Vitis AI development environment 1.1

  • Vivado Design Suite 2019.2

Hardware 

Architecture: Xilinx Alveo™ accelerator cards, Xilinx SoCs, and ACAPs

Prerequisites

  • Basic knowledge of machine learning concepts

  • Comfort with the C, C++, or Python programming languages

  • Familiarity with a basic software development flow

Skills Gained

After completing this comprehensive training, you will have the necessary skills to:

  • Describe Xilinx machine learning solutions with the Vitis AI development environment

  • Describe the supported frameworks, network models, and pre-trained models for cloud and edge applications

  • Utilize DNN algorithms, models, inference and training, and frameworks on cloud and edge computing platforms

  • Use the Vitis AI quantizer and AI compiler to optimize a trained model

  • Use the architectural features of the DPU processing engine to optimize a model for an edge application

  • Identify the high-level libraries and APIs that come with the Xilinx Vitis AI Library

  • Create a custom hardware overlay based on application requirements

  • Create a custom application using a custom hardware overlay and deploy the design

Course Outline

  • Introduction to the Vitis AI Development Environment - Describes the Vitis AI development environment, which consists of the Vitis AI development kit for AI inference on Xilinx hardware platforms, including both edge devices and Alveo accelerator cards. {Lecture}

  • Overview of ML Concepts - Overview of ML concepts such as DNN algorithms, models, inference and training, and frameworks. {Lecture}

  • Frameworks Supported by the Vitis AI Development Environment - Discusses the support for many common machine learning frameworks such as Caffe and TensorFlow. {Lecture}

  • Setting Up the Vitis AI Development Environment - Demonstrates the steps to set up a host machine for developing and running AI inference applications on cloud or embedded devices. {Demo}

  • AI Optimizer - Describes the AI optimizer, which can prune a trained model by up to 90%. This topic is for advanced users and will be covered in detail in the Advanced ML training course. {Lecture}

  • AI Quantizer and AI Compiler - Describes the AI quantizer, which supports model quantization, calibration, and fine-tuning. Also describes the AI compiler tool flow. With these tools, deep learning algorithms can be deployed on the Deep Learning Processor Unit (DPU), which is an efficient hardware platform running on a Xilinx FPGA or SoC. {Lecture, Lab}

  • AI Profiler and AI Debugger - Describes the AI profiler, which provides layer-by-layer analysis to help locate performance bottlenecks. Also covers debugging the results of DPU execution. {Lecture}

  • Introduction to the Deep Learning Processor Unit (DPU) - Describes the Deep Learning Processor Unit (DPU) and its variants for edge and cloud applications. {Lecture}

  • DPU-V1 Architecture Overview - Overview of the DPU-V1 architecture, supported CNN operations, and design considerations. {Lecture}

  • DPU-V2 Architecture Overview - Overview of the DPU-V2 architecture, supported CNN operations, DPU data flow, and design considerations. {Lecture}

  • Vitis AI Library - Reviews the Vitis AI Library, which is a set of high-level libraries and APIs built for efficient AI inference with the DPU. It provides an easy-to-use and unified interface by encapsulating many efficient and high-quality neural networks. (A brief usage sketch follows this outline.) {Lecture, Lab}

  • Creating a Custom Hardware Platform Using the Vivado Design Suite Flow (Edge) - Describes the steps to build a Vivado Design Suite project, add the DPU-V2 IP, and run the design on a target board. {Lab}

  • Creating a Custom Application (Coming Soon) - Illustrates the steps to create a custom application, such as building the Linux image, optimizing the trained model, and using the optimized model to accelerate the design. (A brief deployment sketch follows this outline.) {Lecture, Lab}

  • Creating a Custom Hardware Platform Using the Vitis Environment Flow (Edge) - Describes the steps to build a Vitis unified software platform project that adds the DPU as the kernel (hardware accelerator) and to run the design on a target board. {Lab}
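
To give a feel for the Vitis AI Library lab referenced above, the following is a minimal C++ sketch of running one of the library's pre-built models on an image. It is illustrative only: it assumes OpenCV, the Vitis AI Library, and a face-detection model are installed on the target, and the model name "densebox_640_360" and result field names follow the library's face-detection sample and may differ in your release.

#include <opencv2/opencv.hpp>
#include <vitis/ai/facedetect.hpp>

int main(int argc, char* argv[]) {
  // Read the test image given on the command line.
  cv::Mat image = cv::imread(argv[1]);

  // Create a face-detection model by name; the name must match a model
  // package installed on the target (example name shown here).
  auto detector = vitis::ai::FaceDetect::create("densebox_640_360");

  // Run inference on the DPU; boxes are returned in relative coordinates.
  auto result = detector->run(image);

  for (const auto& face : result.rects) {
    // Scale the relative coordinates back to pixels and draw the box.
    int x = static_cast<int>(face.x * image.cols);
    int y = static_cast<int>(face.y * image.rows);
    int w = static_cast<int>(face.width * image.cols);
    int h = static_cast<int>(face.height * image.rows);
    cv::rectangle(image, cv::Rect(x, y, w, h), cv::Scalar(0, 255, 0), 2);
  }
  cv::imwrite("result.jpg", image);
  return 0;
}

The same create/run pattern applies to the other model classes in the library; only the header, class name, and result fields change.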
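
For the custom-application topic referenced above, the sketch below outlines the typical edge deployment pattern using the DNNDK (N2Cube) runtime API that accompanies the Vitis AI 1.1 edge tools: open the DPU, load the kernel produced by the AI compiler, create and run a task, then release the resources. The kernel name "resnet50", the node names, and the tensor sizes are placeholders; the real values come from the AI compiler output for your model, and the exact header path may vary by release.

#include <vector>
#include <dnndk/dnndk.h>

int main() {
  // Attach to the DPU device.
  dpuOpen();

  // Load the DPU kernel generated by the AI compiler (placeholder name).
  DPUKernel* kernel = dpuLoadKernel("resnet50");
  DPUTask* task = dpuCreateTask(kernel, 0);

  // Feed pre-processed input data to the model's first DPU node
  // (node name and size are placeholders).
  std::vector<float> input(224 * 224 * 3, 0.0f);
  dpuSetInputTensorInHWCFP32(task, "conv1", input.data(),
                             static_cast<int>(input.size()));

  // Run inference on the DPU and read back the final node's output.
  dpuRunTask(task);
  std::vector<float> output(1000);
  dpuGetOutputTensorInHWCFP32(task, "fc1000", output.data(),
                              static_cast<int>(output.size()));

  // Post-process the output (e.g., softmax and top-k), then release resources.
  dpuDestroyTask(task);
  dpuDestroyKernel(kernel);
  dpuClose();
  return 0;
}

The Vitis AI Library shown in the previous sketch wraps this lower-level sequence, together with pre- and post-processing, behind its task-level APIs.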