The Qualcomm Dragonwing RB3 Gen 2 Development Kit is a powerful Linux-based development board built around the QCS6490 SoC. It has two built-in cameras, a Kryo™ 670 CPU, an Adreno™ 643L GPU, and a Hexagon™ 770 NPU rated at 12 TOPS. It's fully supported by Edge Impulse: you'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio.
Install the Edge Impulse CLI on your computer.
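The CLI is distributed as an npm package, so a typical install (assuming Node.js and npm are already available on your machine) looks like this:

```bash
# Install the Edge Impulse CLI globally via npm
npm install -g edge-impulse-cli
```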
Connect power to the back of the RB3 Development Kit.
Connect the RB3 to your computer using a micro-USB cable (using the port highlighted in yellow):
Open a serial connection between your host computer and the board.
You can do this directly using the Edge Impulse CLI by running the following command from your command prompt or terminal:
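The exact invocation can vary by CLI version; a sketch, assuming the flag name and the host's serial device (check `edge-impulse-run-impulse --help` and your `/dev` entries to confirm):

```bash
# Open a raw serial terminal through the Edge Impulse CLI
# (the --raw-serial flag name is an assumption; verify with --help)
edge-impulse-run-impulse --raw-serial

# Alternative with a generic serial terminal; the device path and
# 115200 baud rate are assumptions for a typical Linux host
screen /dev/ttyUSB0 115200
```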
Hold the rightmost push button (seen from the front, highlighted in red) for ~2 seconds. You should see output in the terminal indicating that the board is starting up.
After 30-60 seconds you should see a login prompt in your terminal. Log in with:
Username: root
Password: oelinux123
Next, set up a network connection, either:
Connect an Ethernet cable.
Or, if you want to connect over WiFi:
Qualcomm Linux <1.3: edit `wpa_supplicant.conf`.
Qualcomm Linux 1.3: use `nmcli` (see the example below).
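As an illustration, a typical `nmcli` invocation looks like this (the SSID and password are placeholders you'd replace with your own network credentials):

```bash
# Connect to a WiFi network with NetworkManager's CLI
nmcli device wifi connect "YOUR_SSID" password "YOUR_PASSWORD"
```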
After connecting the board to the internet, reboot it. This refreshes the system clock (through NTP), resolving an issue with invalid certificates when installing the Edge Impulse CLI.
If you want to continue setting up over ssh (so you can unplug the device from your computer), find your IP address via:
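For example (assuming a standard Linux userland; on older images `ifconfig` works as well):

```bash
# Show the IP addresses assigned to the board's network interfaces
ip addr
```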
Then log in via ssh (password: oelinux123):
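A typical login, with 192.168.1.123 standing in for your board's actual IP address:

```bash
# SSH into the board as root (password: oelinux123)
ssh root@192.168.1.123
```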
On the RB3, install the Edge Impulse CLI and other dependencies via:
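The exact dependency script for Qualcomm Linux is provided in the official docs; as a sketch, on most Edge Impulse Linux targets the tooling is installed through npm (this assumes Node.js and npm are already present on the board):

```bash
# Install the Edge Impulse Linux CLI tools globally on the board
npm install edge-impulse-linux -g --unsafe-perm
```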
With all dependencies set up, run:
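On Linux targets the device wizard is typically started with:

```bash
# Start the wizard: log in and select an Edge Impulse project
edge-impulse-linux
```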
This will start a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects, or use a different camera (e.g. a USB camera), run the command with the `--clean` argument.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project and click Devices. The device will be listed there.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To profile your models for the Qualcomm Dragonwing RB3 Gen 2 Development Kit:
Make sure to select the Qualcomm Dragonwing RB3 Gen 2 Development Kit as your target device. You can change the target at the top of the page, next to your user avatar.
Head to your Learning block page in Edge Impulse Studio.
Click on the Calculate performance button.
To provide on-device performance metrics, we use Qualcomm AI Hub in the background (see the image below), which runs the compiled model on a physical device to gather metrics such as the mapping of model layers to compute units, inference latency, and peak memory usage. See the Qualcomm® AI Hub documentation page for more details.
To run your impulse locally on the RB3, open a terminal and run:
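On Linux targets this is typically the Linux runner:

```bash
# Build, download and run the impulse on the device
edge-impulse-linux-runner
```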
This will automatically compile your model with full hardware acceleration, download the model to your RB3 Gen 2, and then start classifying (use `--clean` to switch projects).
Alternatively, you can select the Linux (AARCH64 with Qualcomm QNN) option on the Deployment page.
This will download an `.eim` model that you can run on your board with the following command:
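A typical invocation; the model path below is a placeholder for wherever you saved the downloaded file:

```bash
# Run a downloaded .eim model with the Linux runner
edge-impulse-linux-runner --model-file ./model.eim
```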
Our Linux SDK has examples showing how to integrate the `.eim` model with your favourite programming language.
You can download either the quantized or the float32 version, but the Qualcomm QNN accelerator only supports quantized models. If you select the float32 version, the model will run on the CPU.
When selecting this option, you will obtain a `.zip` archive. We provide instructions in the `README.md` file included in the archive.
See the Qualcomm IM SDK GStreamer pipeline documentation for more information.
If you have an image model, you can get a peek at what your device sees: make sure you're on the same network as your device, then look for the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
If you start the CLI and see:
You'll need to restart the camera server via:
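The exact service name depends on your Qualcomm Linux release; a sketch, assuming a hypothetical systemd unit called cam-server:

```bash
# Restart the camera service (unit name is an assumption; find the
# real one with: systemctl list-units | grep -i cam)
systemctl restart cam-server
```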