Introduction

Renesas RZ and DRP-AI
Renesas is a leading producer of specialized microprocessor and microcontroller solutions found at the heart of many industrial and consumer systems. Their RZ family of microprocessors includes a range of Arm Cortex-A based multicore models targeting a wide range of applications, from industrial networking (RZ/N) and real-time control (RZ/T), to general purpose/HMI and graphics applications (RZ/A, RZ/G), and finally AI-based computer vision applications (RZ/V).

At the heart of the AI-focused RZ/V MPU series is Renesas' own DRP-AI ML accelerator. DRP-AI is a low-power, high-performance ML accelerator designed around Renesas' Dynamically Reconfigurable Processor (DRP) technology, which was originally created to accelerate computer vision applications and is especially useful for speeding up the pre- and post-processing of image data in a computer vision pipeline.

A traditional CPU has fixed data paths, and algorithms are implemented by instructions or software written by a developer to manipulate how these fixed data paths are used. DRP, in contrast, is a form of reprogrammable hardware that can change its processing data paths at run time. This capability, referred to as Dynamic Reconfiguration, enables DRP to provide an optimal hardware implementation of an algorithm by adapting its computing pathways to execute that algorithm as efficiently as possible. The data path configuration loaded into the DRP specifies the operations and interconnections that the DRP implements in hardware, and a finite state machine known as the State Transition Controller (STC) manages this configuration and allows data path configurations to be swapped out at run time.
Dynamic Reconfiguration of Data Paths

Dynamic Loading for Computer Vision Acceleration with DRP

AI MAC found in DRP-AI

DRP-AI Operation

DRP-AI vs alternatives
Hardware Support for DRP-AI
DRP-AI is built into the RZ/V MPU family, which includes a range of models designed for image processing, AI and general purpose applications, with new parts being added in the future.
RZ/V Series Product Roadmap

RZ/V2L Architecture
Development Board Options
Renesas RZ/V2L Evaluation Kit
The Renesas RZ/V2L Evaluation Kit comes in the form of a SMARC v2.1 module carrying the RZ/V2L and supporting hardware, bundled together with a SMARC carrier board that provides dual Gigabit Ethernet, MIPI camera, Micro HDMI, CAN, PMOD and audio interfaces. The RZ/V2L Evaluation Kit runs Yocto Linux and is geared towards product developers and professional applications.
Renesas RZ/V2L Evaluation Kit
Avnet RZBoard V2L
The Avnet RZBoard V2L is an alternative option, also based on the RZ/V2L, that is ideal for quick prototyping and rapid deployments. It comes in the form of a cost-effective single-board computer in the Raspberry Pi form factor, and adds Bluetooth and WiFi connectivity on top of Gigabit Ethernet for a wide variety of AIoT applications.
Avnet RZBoard V2L
Using DRP-AI with Your Own Model
Utilizing the features of DRP-AI requires that your model is preprocessed and prepared before deployment in your application. This is done by dedicated software that converts, or translates, your model into a form that can leverage all the benefits of DRP-AI mentioned above. A lot happens behind the scenes within the DRP-AI accelerator, but the actual implementation is transparent to the user, who does not need to understand how to optimize a model for the DRP-AI hardware or how DRP-AI works at a low level. Two tools are available to take care of this by translating your model into the underlying configurations that make it possible to run on DRP-AI: DRP-AI Translator and DRP-AI TVM. Both create a DRP-AI optimized version of your model but are used in different scenarios, and the output of either tool tells the DRP-AI hardware how to execute your model with maximum performance at the lowest power consumption.

DRP-AI Translator

DRP-AI Translator adds an additional step to your MLOps workflow: it takes ONNX models as input and converts them into the components needed to configure and instruct the DRP and AI MAC found within DRP-AI.
DRP-AI Translator
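Since DRP-AI Translator takes ONNX models as input, the usual first step is to export your trained model to ONNX. A minimal sketch using PyTorch is shown below; the MobileNetV2 model, the 224x224 input shape and the output file name are placeholders for your own model and settings:

```python
# Sketch: exporting a trained PyTorch model to ONNX for DRP-AI Translator.
# MobileNetV2, the 1x3x224x224 input and the file name are placeholders.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None)  # substitute your trained model
model.eval()

# Translators like DRP-AI Translator typically expect a fixed input shape,
# so export with a representative dummy input.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",           # this file is then handed to DRP-AI Translator
    input_names=["input"],
    output_names=["output"],
    opset_version=11,       # conservative opset for downstream converters
)
```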

DRP-AI Translator vs DRP-AI TVM
Working with DRP-AI Translator
To use DRP-AI Translator directly, additional configuration files must be provided alongside the ONNX model file so that the translator can generate the files needed to set up the DRP-AI hardware to execute your model. These additional input files are written in YAML: you need to provide pre- and post-processing definition files together with an address map:
DRP-AI configuration files

DRP-AI Translator files
Simple DRP-AI with Edge Impulse
Whether you choose DRP-AI Translator or DRP-AI TVM, both tools require understanding and expertise to use. The learning curve and effort involved add delays and cost to creating your end application, which defeats part of the reason for using ML in the first place. Unless you are working with a custom model architecture, you are most likely using deep learning for object detection and image classification, the most common applications of AI vision. Edge Impulse includes built-in DRP-AI support for YOLOv5 and Edge Impulse's own FOMO for object detection, as well as MobileNet V1 and V2 for image classification. With Edge Impulse's DRP-AI support, all of this happens behind the scenes: DRP-AI Translator and the associated configuration are handled automatically for the supported models. There is no need to work with the configuration files, read lengthy manuals, or understand the whole process of working with DRP-AI Translator and its input and output files. All that is needed is a few clicks to add DRP-AI support to existing or new models.
DRP-AI Translator vs Edge Impulse
Model Creation for DRP-AI
To add DRP-AI support to your ML projects with Edge Impulse, all that is needed is to select the Renesas RZ/V2L (with DRP-AI accelerator) target from the target selection menu in Edge Impulse Studio. This invokes the DRP-AI Translator tool and generates its input files automatically behind the scenes, while also ensuring it works with your custom model.
Enabling DRP-AI in Edge Impulse Studio

YOLO for DRP-AI in Edge Impulse Studio
Deployment with Edge Impulse
Once you have completed the process of building your model, the next step is to deploy it to your hardware. For quick testing directly on the RZ/V2L Evaluation Kit you can use the Edge Impulse CLI, specifically the edge-impulse-linux-runner command, run from the RZ/V2L board itself after installing the Edge Impulse CLI. This deploys the model directly to your board, hosted in Edge Impulse's TypeScript based web deployment, and you can connect to the running model from your browser to evaluate performance.
You will ultimately want to deploy the model into your own custom application, and the two choices you have are the C++ DRP-AI library, for embedding in a custom C++ application, or the EIM deployment.


EIM usage
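An EIM is an Edge Impulse model packaged as a self-contained Linux executable that your application drives through the Edge Impulse Linux SDKs. A minimal sketch of loading an .eim file and classifying a single camera frame with the Linux Python SDK could look like the following; the model file name and camera index are placeholders:

```python
# Sketch: running an Edge Impulse .eim model with the Linux Python SDK.
# "model.eim" and the camera index are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "model.eim"

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded model:", model_info["project"]["name"])

    # Grab one frame from the first USB camera for a quick test.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")

    # The SDK expects RGB; OpenCV captures BGR.
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)["result"]

    if "bounding_boxes" in result:           # object detection model
        for bb in result["bounding_boxes"]:
            print(bb["label"], bb["value"], bb["x"], bb["y"], bb["width"], bb["height"])
    else:                                     # image classification model
        for label, score in result["classification"].items():
            print(label, score)
```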
Deployment Examples
Both object detection and image classification can be used on their own, but it is often useful to combine them: the object detector first locates specific objects in a frame, and a classifier then assigns each detected object to a finer-grained category. The object detector runs first, its output is passed to the classifier, and the final result is a set of bounding boxes, each with a classification label.
Two Stage AI Vision Pipeline
The example application consists of two files:
- app.py, which contains the main two-stage pipeline and web server
- eim.py, which is a custom Python SDK for using EIMs in your own application
Various configuration options are available in the Application Configuration Options section near the top of the application; a minimal sketch of such a configuration block and the two-stage loop is shown after the requirements list below. To run the application you will need:
- RZ/V2L Evaluation Kit / Avnet RZBoard V2L
- USB Web Cam
- Yocto Linux with support for Python3, OpenCV for Python and Edge Impulse CLI
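For illustration, here is a minimal, simplified sketch of what such a configuration block and two-stage loop can look like when built on the Edge Impulse Linux Python SDK rather than the project's own eim.py; the .eim file names and camera index are placeholders:

```python
# Sketch: two-stage pipeline (object detection -> classification) using the
# Edge Impulse Linux Python SDK. File names and camera index are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

# --- Application configuration options ---
DETECTOR_EIM = "detector.eim"      # object detection model (.eim)
CLASSIFIER_EIM = "classifier.eim"  # image classification model (.eim)
CAMERA_INDEX = 0                   # USB web cam

with ImageImpulseRunner(DETECTOR_EIM) as detector, \
     ImageImpulseRunner(CLASSIFIER_EIM) as classifier:
    detector.init()
    classifier.init()

    cap = cv2.VideoCapture(CAMERA_INDEX)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")

    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Stage 1: locate objects. Bounding boxes are relative to the cropped
    # frame returned by get_features_from_image().
    features, cropped = detector.get_features_from_image(rgb)
    boxes = detector.classify(features)["result"].get("bounding_boxes", [])

    # Stage 2: classify each detected region.
    for bb in boxes:
        crop = cropped[bb["y"]:bb["y"] + bb["height"], bb["x"]:bb["x"] + bb["width"]]
        crop_features, _ = classifier.get_features_from_image(crop)
        scores = classifier.classify(crop_features)["result"]["classification"]
        best = max(scores, key=scores.get)
        print(f"{bb['label']} at ({bb['x']},{bb['y']}) -> {best} ({scores[best]:.2f})")
```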
Product Quality Inspection - Candy Inspection
A possible use case of this is product quality inspection, where a candy detection model is trained together with a classification model that grades the quality of each detected candy.
Two Stage pipeline used for Candy Classification (QC Application)

Pose Detection on Renesas RZ/V2L with DRP-AI
Edge Impulse has created a feature processing block that contains a pretrained PoseNet model. This PoseNet block produces body keypoints as features from input images, instead of the raw scaled image features normally used by image classification models. To train a model on top of it, you capture images of different poses and label the images according to the pose.

