
- Comes with Ubuntu 20.04.6 LTS (Focal Fossa) out of the box.
- Camera resolution/frame rate – 12MP at 30fps (per camera)
- Lens Interface - C/CS, 12mm variable focal length, F2.8-F16 aperture (per camera)
- Compatible with Android and Ubuntu, with Embedded Linux (Yocto) support coming soon
- Inputs / Outputs: Ethernet, USB-C, DisplayPort channels, HDMI, GPIO / IO, etc.
- Power: 9-36 V DC. Operating temperature: −35 °C to +75 °C
1. Setting Up Your Qualcomm Dragonwing Triple Vision Industrial AI Camera
Hardware Setup
- Connect the camera platform to power.
- Connect up to three cameras.
- Attach a display via HDMI if needed.
- Attach a mouse and keyboard to the USB-C port if needed.
- Connect via SSH for headless operation.
- It’s recommended to use an HDMI display plus a mouse and keyboard for the configuration when you bring up the board for the first time.
Connecting to the internet
An Ethernet connection is recommended; however, you can activate a Wi-Fi connection by following these steps.
- Remount the default read-only rootfs filesystem with read-and-write access prior to editing the `/data/misc/wifi/wpa_supplicant.conf` file:
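A typical remount command for this (a sketch; the exact mount point may differ on your image):

```
mount -o remount,rw /
```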
- Stop wpa_supplicant:
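On a systemd-based image this is typically:

```
systemctl stop wpa_supplicant
```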
- Modify the content of the default `wpa_supplicant.conf` file to match the SSID and password of your router. You can use `vi` on the device to edit the file:
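For example:

```
vi /data/misc/wifi/wpa_supplicant.conf
```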
You can refer to the following configurations for the security types specified in the default `wpa_supplicant.conf` file at `/etc` when adding your router configuration.
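A minimal WPA2-PSK entry looks like this (illustrative values; replace the SSID and passphrase with your own):

```
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
    key_mgmt=WPA-PSK
}
```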
- Save the modified `wpa_supplicant.conf` file and verify its content using the following command:
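For example:

```
cat /data/misc/wifi/wpa_supplicant.conf
```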
- Reboot or power cycle the device, then wait approximately one minute for it to establish a WLAN connection with the updated SSID and password.
Enable SSH
Check if SSH is bound to localhost only:
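One way to check is with `ss` (assuming iproute2 is installed); if the output shows `127.0.0.1:22`, sshd only accepts local connections, and the `ListenAddress` setting in `/etc/ssh/sshd_config` needs to be changed:

```
ss -tlnp | grep ':22'
```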
Desktop environment
This system is not running a standard desktop environment like GNOME or KDE. Instead, it’s using Weston, a minimal and lightweight “compositor” that provides the basic foundation for a graphical session on top of the modern Wayland display protocol.
2. Installing the Edge Impulse Linux CLI
Once rebooted, open the terminal again and install the Edge Impulse CLI and other dependencies via:
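On most Linux targets the Edge Impulse Linux CLI is installed via npm; a typical invocation (assuming Node.js and npm are already present on the device) is:

```
npm install edge-impulse-linux -g --unsafe-perm
```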
Make note of the additional commands shown at the end of the installation process; the `source ~/.profile` command will be needed prior to running Edge Impulse in subsequent sessions.
3. Connecting to Edge Impulse
With all dependencies set up, connect the device to your Edge Impulse project by running the command below (if you want to switch to a different project later, run it again with the `--clean` argument).
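With the Edge Impulse Linux CLI, the command is:

```
edge-impulse-linux
```

This starts a wizard that asks you to log in and select an Edge Impulse project.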

4. Verifying that your device is connected
That’s all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project and click Devices. The device will be listed there.
Next steps: building a machine learning model
With everything set up, you can now build your first machine learning model with these tutorials:
- Responding to your voice
- Recognize sounds from audio
- Adding sight to your sensors
- Object detection
- Visual anomaly detection with FOMO-AD
Deploying back to device
You have multiple ways to deploy the model back to the device.
Using the Edge Impulse Linux CLI
To run your Impulse locally on the device, open a terminal and run the command below; this will download and run your model on the device (run with `--clean` to switch projects).
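The runner command from the Edge Impulse Linux CLI handles this:

```
edge-impulse-linux-runner
```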
Alternatively, you can select the Linux (AARCH64 with Qualcomm QNN) option on the Deployment page.

This will build a `.eim` model that you can run on your board with the following command:
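For example (assuming the downloaded model file is saved as `model.eim`):

```
edge-impulse-linux-runner --model-file ./model.eim
```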
Use the `--camera` argument to select which camera you want to use, and pass `PORT` to select the preview port number:
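A sketch (the camera value format and the preview-port flag may differ; check `edge-impulse-linux-runner --help` for the exact options):

```
# run the model using the second camera
edge-impulse-linux-runner --model-file ./model.eim --camera 1
```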

Using the Edge Impulse Linux Inferencing SDKs
Our Linux SDK has examples of how to integrate the `.eim` model with your favorite programming language.
You can download either the quantized or the float32 version of your model, but the Qualcomm NN accelerator only supports quantized models. If you select the float32 version, the model will run on the CPU.
Using the IM SDK GStreamer option
When selecting this option, you will obtain a `.zip` archive. We provide instructions in the `README.md` file included in the archive.
See more information on the Qualcomm IM SDK GStreamer pipeline.
Image model?
If you have an image model, you can get a peek at what your device sees: make sure you are on the same network as your device, then find the ‘Want to see a feed of the camera and live classification in your browser’ message in the console. Open the URL in a browser, and both the camera feed and the classification results are shown:
Live feed with classification results
Useful tips
Running from /data
If the filesystem (e.g., `/data`) is mounted with the `noexec` flag, Linux will refuse to execute any binaries from it. So if you are running the software from inside `/data`, do the following:
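A typical fix is to remount the filesystem with exec permissions (a sketch; adjust the mount point to match your system):

```
mount -o remount,exec /data
```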
Executing from SSH session
The Wayland/Weston compositor needs to be configured to work from an SSH session; the following two lines are needed:
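The exact values vary by image; on Qualcomm systems they are typically environment variables along these lines:

```
export XDG_RUNTIME_DIR=/dev/socket/weston
export WAYLAND_DISPLAY=wayland-1
```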
Testing the camera pipeline:
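A sketch of such a test pipeline, assuming the Qualcomm `qtiqmmfsrc` GStreamer element and `waylandsink` are available on the image:

```
gst-launch-1.0 qtiqmmfsrc camera=0 ! video/x-raw,width=1280,height=720 ! waylandsink
```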
Set `camera=0,1,2` to switch between cameras. This command should show you a live stream from the camera.

Edge Impulse GST Plugin:
If you are looking for a way to run the impulse natively in GStreamer, you can use the following plugin: https://github.com/edgeimpulse/gst-plugins-edgeimpulse
If you build your model into this plugin, you will get a `libgstedgeimpulse.so` file that you can install on your system:
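For example (an illustrative path; the exact GStreamer plugins directory may vary by image):

```
# On the Qualcomm system, install to the GStreamer plugins directory
cp libgstedgeimpulse.so /usr/lib/gstreamer-1.0/
```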
