Easy deployment and scaling of Vitis AI accelerators using InAccel

  • InAccel (HuggingFace) Spaces — ML apps that demonstrate the capabilities of the whole InAccel Vitis AI platform… in your browser.
  • InAccel (HuggingFace) Models — A version-controlled “mirror” of the Xilinx AI Model Zoo, for automating integration, deployment and delivery of FPGA models.
  • Xilinx AI Model Zoo — A comprehensive set of pre-optimized models that are ready to deploy on Xilinx devices.
  • Xilinx AI Optimizer — An optional model optimizer that can prune a model by up to 90%. It is available separately under a commercial license.
  • Xilinx AI Quantizer — A powerful quantizer that supports model quantization, calibration, and fine tuning.
  • Xilinx AI Compiler — Compiles the quantized model into a highly efficient instruction set and data flow.
  • Xilinx AI Profiler — Performs an in-depth analysis of the efficiency and utilization of the AI inference implementation.
  • InAccel Coral API — Offers high-level C/C++, Java, Python and Rust APIs for Vitis-AI applications. It is based on a client-server architecture and uses InAccel Coral to transform FPGA resources into a single pool of Vitis-AI runners.
  • InAccel Coral — Orchestration framework that allows the distributed acceleration of large data sets across clusters of FPGA resources using simple programming models. It is designed to scale up from single devices to hundreds of FPGAs, each offering local computation and storage.
  • InAccel Vitis-AI Runtime — An InAccel Coral-compliant runtime atop the open source Xilinx VART library.
  • Xilinx DPU — Efficient and scalable IP cores that can be customized to meet the needs of diverse AI applications.
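
A packaged model is described by a JSON manifest like the one below, which records the xmodel file, the bitstream it belongs to, the target platform and the arguments of its runner kernel:
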
{
  "name": "yolov3_adas_pruned_0_9.xmodel",
  "bitstreamId": "vitis.ai.darknet",
  "version": "1.3.1",
  "description": "dk_yolov3_cityscapes_256_512_0.9_5.46G_1.3 (detection)",
  "platform": {
    "vendor": "xilinx",
    "name": "u50",
    "version": "vitis-ai/1.3"
  },
  "kernels": [
    {
      "name": [
        "runner"
      ],
      "kernelId": "yolov3-cityscapes256x512",
      "arguments": [
        {
          "type": "float*",
          "name": "input",
          "access": "r"
        },
        {
          "type": "float*",
          "name": "output",
          "access": "w"
        }
      ]
    }
  ]
}
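
The fully qualified identifier that an application uses to target this accelerator comes directly from the manifest: the bitstreamId ("vitis.ai.darknet") followed by the kernelId ("yolov3-cityscapes256x512"), which together form the "vitis.ai.darknet.yolov3-cityscapes256x512" string used in the code example below.
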
  • A resource represents the FPGA DPU, which can be reprogrammed with compatible xmodels.
  • A memory encapsulates the available device-side address space for DPU input/output data.
  • A buffer includes the host-side memory address of a tensor, as well as its size.
  • A compute unit stores information related to a DPU runner.
  1. Allocate input and output tensors using the InAccel allocator.
  2. Create an acceleration request for the target kernel and set its input and output arguments.
  3. Submit the request and wait for the result.

#include <inaccel/coral>  // InAccel Coral C++ API

// Steps 1 and 2: allocator-backed tensors and a request for the YOLOv3 runner
inaccel::vector<float> input(input_size);
inaccel::vector<float> output(output_size);
inaccel::request yolov3("vitis.ai.darknet.yolov3-cityscapes256x512");
yolov3.arg(input).arg(output);

// Step 3: submit the request and block until it completes
inaccel::submit(yolov3).get();
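
The final call, inaccel::submit(yolov3).get(), submits the request and then blocks on its completion. Because submission and completion are separate steps, an application can keep many requests in flight at once and let Coral schedule them across every DPU in the pool, which is how the scaling from a single device to hundreds of FPGAs is exposed to the programmer. The following sketch illustrates that pattern. It is only a sketch, not taken from the InAccel documentation: the inaccel/coral header name and the std::future<void> return type of inaccel::submit are assumptions (the .get() call above suggests a future-like handle), inaccel::vector is treated as a std::vector with an FPGA-aware allocator, and infer_batch is a hypothetical helper.

#include <inaccel/coral>  // InAccel Coral C++ API (assumed header name)
#include <cstddef>
#include <future>
#include <vector>

// Run YOLOv3 on a whole batch of frames, one acceleration request per frame.
// inaccel::submit() is assumed to return a std::future<void>-like handle.
void infer_batch(const std::vector<std::vector<float>>& frames,
                 std::size_t output_size) {
  std::vector<inaccel::vector<float>> inputs, outputs;
  std::vector<inaccel::request> requests;
  std::vector<std::future<void>> pending;
  inputs.reserve(frames.size());
  outputs.reserve(frames.size());
  requests.reserve(frames.size());

  for (const auto& frame : frames) {
    // Host buffers backed by the InAccel allocator, visible to the DPU pool.
    inputs.emplace_back(frame.begin(), frame.end());
    outputs.emplace_back(output_size);

    // One request per frame, targeting the YOLOv3 runner from the manifest.
    requests.emplace_back("vitis.ai.darknet.yolov3-cityscapes256x512");
    requests.back().arg(inputs.back()).arg(outputs.back());

    // Submit without waiting; Coral places each request on any free DPU.
    pending.push_back(inaccel::submit(requests.back()));
  }

  // Wait for every in-flight request to complete.
  for (auto& f : pending) f.get();
}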
