Developing AI Inference Solutions with
the Vitis AI Platform

Learn how to use the Vitis™ AI development platform in conjunction with DNN algorithms, models, inference and training, and frameworks on cloud and edge computing platforms.

TechSource Systems Pte Ltd

Course Highlights

This course describes how to use the Vitis™ AI development platform
in conjunction with DNN algorithms, models, inference and training,
and frameworks on cloud and edge computing platforms.

The emphasis of this course is on:

  • Illustrating the Vitis AI tool flow
  • Utilizing the architectural features of the Deep Learning Processor Unit (DPU)
  • Optimizing a model using the AI quantizer and AI compiler
  • Utilizing the Vitis AI Library to optimize pre-processing and
    post-processing functions
  • Creating a custom platform and application
  • Deploying a design
  • Providing an overview of the Xilinx Kria™ K26 SOM and its
    advantages

What’s New in Vitis AI 1.4.1

  • All modules: Support added for the VCK190 and VCK5000 boards and the Kria KV260 SOM
  • Frameworks Supported by the Vitis AI Development Environment module: Support added for 16 new models, bringing the total to 108 models across different deep learning frameworks (Caffe, TensorFlow, TensorFlow 2, and PyTorch)
  • AI Quantizer and AI Compiler module: PyTorch support updated
    from version 1.5 to 1.7.1
  • Vitis AI Library module: New Graph Runner API introduced for
    DPU/CPU subgraph inference
  • Two new modules added:
    • Xilinx Kria KV260 Vision AI Starter Kit Overview (Lecture)
    • Customizing the AI Models (Lecture, Lab)
  • All labs have been updated to the latest software versions

Who Should Attend

Software and hardware developers, AI/ML engineers, data scientists, and anyone who needs to accelerate their software applications using Xilinx devices


Course Prerequisites


Course Benefits

After completing this comprehensive training, you will have the
necessary skills to:

  • Describe Xilinx machine learning solutions with the Vitis AI development environment
  • Describe the supported frameworks, network models, and
    pre-trained models for cloud and edge applications
  • Utilize DNN algorithms, models, inference and training, and
    frameworks on cloud and edge computing platforms
  • Use the Vitis AI quantizer and AI compiler to optimize a trained
    model
  • Use the architectural features of the DPU processing engine to
    optimize a model for an edge application
  • Identify the high-level libraries and APIs that come with the Xilinx Vitis AI Library
  • Create a custom hardware overlay based on application
    requirements
  • Create a custom application using a custom hardware overlay
    and deploy the design
  • Describe the Kria K26 SOM and its advantages
  • Customize the AI models used in accelerated applications on the Kria K26 SOM

Partners


TechSource Systems is a MathWorks Authorised Reseller and Training Partner


Course Outline

Vitis AI Environment Overview

  • Introduction to the Vitis AI Development Environment – Describes the Vitis AI development environment, which consists of the Vitis AI development kit for AI inference on Xilinx hardware platforms, including both edge devices and Alveo accelerator cards.
  • Frameworks Supported by the Vitis AI Development Environment – Discusses the support for many common machine learning frameworks such as Caffe, TensorFlow, and PyTorch.
  • Setting Up the Vitis AI Development Environment – Demonstrates the steps to set up a host machine for developing and running AI inference applications on cloud or embedded devices.

ML Concepts

  • Overview of ML Concepts – Introduces ML concepts such as DNN algorithms, models, inference and training, and frameworks.

Vitis AI Environment Toolchain

  • AI Optimizer – Describes the AI optimizer, which can prune a trained model by up to 90%. This topic is for advanced users and is covered in detail in the Advanced ML training course.
  • AI Quantizer and AI Compiler – Describes the AI quantizer, which supports model quantization, calibration, and fine-tuning. Also describes the AI compiler tool flow. With these tools, deep learning algorithms can be deployed on the Deep Learning Processor Unit (DPU), an efficient hardware platform running on a Xilinx FPGA or SoC, as sketched below.
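For orientation, the sketch below shows one way this quantize-and-compile flow might look for a PyTorch model: calibrate and export with vai_q_pytorch, then compile for a DPU target with vai_c_xir. The model, input shape, calibration data, and file names are assumptions made for illustration and are not taken from the course labs; exact APIs and options may differ between Vitis AI releases.

```python
# A minimal post-training quantization sketch with vai_q_pytorch (Vitis AI 1.4.x).
# The ResNet-50 model, input shape, and random calibration data below are
# illustrative assumptions, not part of the course labs.
import torch
from torchvision.models import resnet50
from pytorch_nndct.apis import torch_quantizer

float_model = resnet50(pretrained=True).eval()            # stand-in trained model
dummy_input = torch.randn(1, 3, 224, 224)                 # assumed input shape
calib_batches = [torch.randn(8, 3, 224, 224) for _ in range(10)]  # stand-in calibration data

# Step 1: calibration ("calib" mode) collects activation statistics for quantization.
quantizer = torch_quantizer("calib", float_model, (dummy_input,))
quant_model = quantizer.quant_model
with torch.no_grad():
    for batch in calib_batches:
        quant_model(batch)
quantizer.export_quant_config()

# Step 2: "test" mode evaluates the quantized model and exports an .xmodel
# that the AI compiler can consume.
quantizer = torch_quantizer("test", float_model, (dummy_input,))
with torch.no_grad():
    quantizer.quant_model(dummy_input)
quantizer.export_xmodel(deploy_check=False)

# Step 3 (AI compiler, run from a shell): map the quantized graph onto the DPU
# target described by an arch.json, for example:
#   vai_c_xir -x quantize_result/ResNet_int.xmodel -a arch.json -o compiled -n resnet50
```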

Profiler

  • AI Profiler – Describes the AI profiler, which provides layer-by-layer performance analysis to help identify bottlenecks. Also covers debugging DPU inference results.

Deep Learning Processor Unit (DPU)

  • Introduction to the Deep Learning Processor Unit (DPU) – Describes the Deep Learning Processor Unit (DPU) and its
    variants for edge and cloud applications.
  • DPUCADX8G Architecture Overview – Overview of the DPUCADX8G architecture, supported CNN operations, and design considerations.
  • DPUCZDX8G Architecture Overview – Overview of the DPUCZDX8G architecture, supported CNN operations, DPU data flow, and design considerations.

AI Libraries

  • Vitis AI Library – Reviews the Vitis AI Library, a set of high-level libraries and APIs built for efficient AI inference with the DPU. It provides an easy-to-use, unified interface by encapsulating many efficient and high-quality neural networks; a graph-runner inference sketch follows below.
    Note that the edge flow version of the lab is not available in the On-Demand curriculum because an evaluation board is required for the entirety of the lab.
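As a rough illustration, the sketch below runs a compiled model through the graph runner Python interface, which dispatches DPU subgraphs to the accelerator and CPU subgraphs to the host. The model path and input handling are placeholders, and the exact API surface may differ slightly between Vitis AI releases.

```python
# A hedged sketch of DPU/CPU subgraph inference with the graph runner Python API
# (Vitis AI 1.4.x). The .xmodel path and zero-filled input are placeholders.
import numpy as np
import xir
import vitis_ai_library

graph = xir.Graph.deserialize("resnet50.xmodel")   # compiled model (placeholder path)
runner = vitis_ai_library.GraphRunner.create_graph_runner(graph)

input_buffers = runner.get_inputs()                # tensor buffers for all graph inputs
output_buffers = runner.get_outputs()

# Real code would copy preprocessed image data into the input buffer here.
input_view = np.asarray(input_buffers[0])
input_view[...] = 0

# The graph runner schedules DPU subgraphs on the accelerator and CPU subgraphs
# (e.g. ops not supported by the DPU) on the host, in topological order.
job_id = runner.execute_async(input_buffers, output_buffers)
runner.wait(job_id)

scores = np.asarray(output_buffers[0])
print("top-1 index:", int(scores.reshape(scores.shape[0], -1).argmax(axis=1)[0]))
```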

Custom Hardware and Application Development

  • Creating a Custom Hardware Platform with the DPU Using the Vivado Design Suite Flow (Edge) – Illustrates the steps to build a Vivado Design Suite project, add the DPUCZDX8G IP, and run the design on a target board.
  • Creating a DPU Kernel Using the Vitis Environment Flow (Edge) – Illustrates the steps to build a Vitis unified software platform project that adds the DPU as the kernel (hardware accelerator)
    and to run the design on a target board.
  • Creating a Vitis Embedded Acceleration Platform (Edge) – Describes the Vitis embedded acceleration platform, which provides product developers with an environment for creating embedded software and accelerated applications on heterogeneous platforms based on FPGAs, Zynq® SoCs, and Alveo data center cards.
  • Creating a Custom Application (Edge) – Illustrates the steps to create a custom application, including building the hardware and Linux image, optimizing the trained model, and using the optimized model to accelerate a design, as sketched below.
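The sketch below suggests what the runtime portion of such a custom application might look like with the VART Python API: load a compiled .xmodel, locate the DPU subgraph, and run one inference job. Paths, buffer data types, and the zero-filled input are illustrative assumptions.

```python
# An illustrative sketch of driving a compiled DPU kernel from a custom
# application with the VART Python API (Vitis AI 1.4.x).
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("dpu_model.xmodel")  # output of the AI compiler (placeholder)

# The compiler partitions the graph into subgraphs; pick the one mapped to the DPU.
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

runner = vart.Runner.create_runner(dpu_subgraph, "run")

in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Allocate host buffers matching the tensor shapes; real code would fill the
# input with preprocessed image data in the DPU's fixed-point format.
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]
output_data = [np.zeros(tuple(out_tensor.dims), dtype=np.int8)]

job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print("output tensor shape:", output_data[0].shape)
```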

Kria SOM (Optional)

  • Xilinx Kria KV260 Vision AI Starter Kit Overview – Provides an overview of the Xilinx Kria KV260 Vision AI Starter Kit, its features, and interfaces. The boot devices, heat sink, firmware, and power-on sequence for the kit are also described.
  • Customizing the AI Models – Shows how to customize the AI models used in the accelerated applications.