Deploying OpenVINO™ Toolkit on Raptor Lake-P

This document walks you through deploying the Intel® Distribution of the OpenVINO™ Toolkit on the COM Express Type 6 Raptor Lake-P. The OpenVINO™ Toolkit installer for the COM Express Type 6 Raptor Lake-P can be found here. The linked page lets you choose the package that matches your host system; it is available for Windows, Linux, and macOS.

Included in the download package

  • Runtime/Inference Engine
  • Model Optimizer
  • Benchmark Tool
  • Accuracy Checker
  • Annotation Converter
  • Post-Training Optimization Tool
  • Model Downloader and other Open Model Zoo tools
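
After installing the runtime (for example, with `pip install openvino`), a quick sanity check is to list the inference devices the OpenVINO™ Runtime can see. Below is a minimal sketch, assuming the `openvino` Python package from the 2022.1 release; the import guard only exists so the snippet degrades gracefully on machines where the package is missing:

```python
# Sanity check for an OpenVINO install: list the visible inference devices.
# Assumes the `openvino` Python package (2022.1+), e.g. `pip install openvino`.
try:
    from openvino.runtime import Core
except ImportError:
    Core = None  # OpenVINO not installed on this machine

def list_devices():
    """Return the device names the OpenVINO Runtime can use, or [] if absent."""
    if Core is None:
        return []
    return list(Core().available_devices)

if __name__ == "__main__":
    # On a Raptor Lake-P board this typically includes 'CPU' and 'GPU'.
    print(list_devices())
```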

OpenVINO™ Toolkit 2022.1, the latest release, ships with pre-trained models and algorithms, along with a number of tutorials and supported frameworks that can be adapted to suit your needs.

Frameworks available in the latest OpenVINO™ Toolkit

  • Caffe
  • Kaldi
  • MXNet
  • ONNX
  • PyTorch
  • TensorFlow 1.x
  • TensorFlow 2.x
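
Models from these frameworks are converted to OpenVINO's Intermediate Representation (IR) with the Model Optimizer. The sketch below assembles a typical `mo` invocation; the `mo` command ships with the OpenVINO 2022.1 developer tools (`pip install openvino-dev`), and the model path here is only a placeholder:

```python
# Sketch: convert a framework model (e.g. ONNX) to OpenVINO IR.
# `mo`, `--input_model`, and `--output_dir` are Model Optimizer CLI options;
# "model.onnx" is a placeholder path, not a file shipped with this guide.
import subprocess

def build_mo_command(input_model, output_dir="ir"):
    """Assemble the Model Optimizer command line for a given model file."""
    return ["mo", "--input_model", input_model, "--output_dir", output_dir]

cmd = build_mo_command("model.onnx")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on a machine with openvino-dev
```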

OpenVINO Notebooks can be used to develop a model that fits your requirements. The COM Express Type 6 Raptor Lake-P platform can run models on both the CPU and the integrated GPU. The COM Express Type 6 Raptor Lake-P is also VPU-compatible: if you wish to add a VPU, you can use an Intel® Neural Compute Stick 2. To make sure you don't miss any updates, follow our YouTube channel.
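
At inference time, the runtime targets these devices by name; a Neural Compute Stick 2 shows up as "MYRIAD". The helper below is a hedged sketch of a fallback device chooser — the preference order (VPU, then GPU, then CPU) is an illustrative assumption, not a recommendation from this guide:

```python
# Sketch: pick an inference device from those the runtime reports.
# Device names ("MYRIAD", "GPU", "CPU") follow the OpenVINO Runtime
# convention; the preference order below is an assumption.
PREFERENCE = ("MYRIAD", "GPU", "CPU")

def pick_device(available):
    """Return the most preferred device present in `available`."""
    for device in PREFERENCE:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

# Example with the devices a Raptor Lake-P board plus an NCS2 might report:
print(pick_device(["CPU", "GPU", "MYRIAD"]))  # -> MYRIAD
```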