Qualcomm Device Cloud
The Qualcomm® Device Cloud lets you remotely access real devices, so as an Edge Impulse user you can get started without any investment in physical hardware. Gain access to devices like the Qualcomm Dragonwing RB3 Gen 2 Dev Kit to get started with AI on Qualcomm® hardware. Users get 5,000 minutes of free device time, which is more than enough to run inference on a few static images and do some initial development before you need to invest in hardware.

In this guide, we will show you how to:
Spin up a virtual Qualcomm Dragonwing RB3 Gen 2 in Qualcomm Device Cloud (QDC).
Install the Edge Impulse CLI on the device.
Connect the device to Edge Impulse Studio.
Run AI inference on a static test image.

Prerequisites
Qualcomm Device Cloud account – Sign up for free access to the Device Cloud.
Edge Impulse account – Sign up if you don’t already have one.
1. Launch an interactive RB3 Gen 2 session
Click the Devices tab in the Qualcomm Device Cloud web UI, then select Advanced on-device AI with Qualcomm Dragonwing™ RB3 Gen 2. You should see a Try Now suggestion. If you don't see this option, you may need to request access to the RB3 Gen 2 device type.

Log in to QDC > Devices > IoT > RB3 Gen 2 > Start Interactive Session.
Session mode
SSH only – headless shell (fastest).
Screen mirroring + SSH – adds VNC if you need the GUI.
Package upload – This is where you can upload files to the board. Create a zip with your test image (e.g., example.jpg) and upload it here. If you skip this step, you can upload files later using the QDC web UI.
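If you prefer the command line, the upload package can be prepared on your local machine before you start the session. A minimal sketch, assuming a standard `zip` tool is installed; the file name example.jpg and the archive name qdc-upload.zip are placeholders, not names QDC requires:

```shell
# Run on your local machine before starting the QDC session.
# Create a stand-in test image, then bundle it for the package-upload step.
: > example.jpg                  # stand-in file; use a real JPEG in practice
zip qdc-upload.zip example.jpg   # archive name is arbitrary
```

Upload the resulting zip in the Package upload field when configuring the session.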
Click Start. QDC powers on the board and shows your SSH credentials.
The following steps mirror those in the physical Qualcomm Dragonwing RB3 Gen 2 Dev Kit tutorial.
2. Install the Edge Impulse CLI
QDC images are minimal, so we have prepared a script that installs Node.js and the Edge Impulse CLI. Run it once per session:
```shell
wget https://cdn.edgeimpulse.com/firmware/linux/setup-edge-impulse-qc-linux.sh
sh setup-edge-impulse-qc-linux.sh
```

3. Initialise the CLI environment
```shell
source ~/.profile
edge-impulse-linux --version  # verify installation
```

4. Connect the board to Edge Impulse Studio
```shell
edge-impulse-linux
```
Paste the one-time authentication key from Studio > Devices > Connect.
Select or create a project.
Camera prompt: Choose None – we will use a static image.
If you ever need to reset the configuration:
```shell
edge-impulse-linux --clean
```

5. Run inference on a static image
5.1 Upload a test image

Use the QDC web UI to upload example.jpg (or any JPEG/PNG) to /data/local/tmp on the board, then move it to your home directory:
```shell
mv /data/local/tmp/example.jpg ~/
```
5.2 Classify the image
```shell
edge-impulse-linux-runner --disable-camera --image ~/example.jpg
```
The runner downloads your model, performs inference, and prints the predicted label(s) and confidence.
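Once a single image classifies correctly, the same invocation can be repeated over a folder of images. A minimal sketch; the classify_all helper and the images folder are illustrative conveniences, not part of the Edge Impulse CLI, and the only runner flags used are the ones shown above:

```shell
# Hypothetical helper: run the runner with the flags from this guide over
# every JPEG in a folder. Assumes the CLI from step 2 is on PATH.
classify_all() {
  for img in "$1"/*.jpg; do
    [ -e "$img" ] || return 0                 # folder empty: nothing to do
    echo "=== $img ==="
    edge-impulse-linux-runner --disable-camera --image "$img"
  done
}

# Example: classify_all "$HOME/images"
```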
Summary
You spun up a virtual RB3 Gen 2 in Qualcomm Device Cloud, installed the Edge Impulse CLI, connected the board to Edge Impulse Studio, and ran AI inference on a static test image, all without physical hardware on your desk.
Next steps
Now you can explore more advanced scenarios, such as streaming live camera data or running inference on a physical RB3 Gen 2 board.