Impulses can be deployed to your Arduino UNO Q via App Lab. Arduino App Lab is a platform that enables developers to easily build and share Arduino applications, and it offers an intuitive interface to deploy an application locally in just one click. In Edge Impulse Studio you can deploy your model as either Linux aarch64 or Linux Arduino UNO Q (GPU), then import it on your UNO Q to run edge AI applications that use your custom model.

Arduino App Lab version 0.1.23

This tutorial guides you through deploying App Lab example applications, as well as new custom applications that use your own impulse, to the Arduino UNO Q. The tutorial is designed for the Arduino UNO Q (2 GB) using App Lab version 0.1.23.

Prerequisites

Before you start, make sure you have completed the Arduino UNO Q setup using a monitor, keyboard, and mouse to enable WiFi and SSH access. Additionally, make sure you followed one of the following tutorials and have a trained impulse:

Deploying an example application

The Arduino App Lab launches automatically when the UNO Q starts. If you don’t see it, go to Applications (top left corner), navigate to Accessories, and click on Arduino App Lab.

Keyword spotting

First, we are going to deploy the Hey Arduino application from the App Lab examples onto the Arduino UNO Q.

Hey Arduino keyword spotting application from Arduino App Lab version 0.1.23

The Hey Arduino application is a keyword spotting application designed to trigger an LED matrix animation (a heart-shaped animation) when the phrase “Hey Arduino” is detected by the microphone. This application is interesting as it takes advantage of both the CPU and the MCU available on the Arduino UNO Q. The CPU handles the keyword spotting, and the onboard microcontroller runs the sketch that displays the LED matrix animation once the keyword has been recognized.

Hey Arduino app.yaml file from Arduino App Lab version 0.1.23

The application uses the keyword_spotting brick. A brick is like a code package that provides a specific piece of functionality needed by the application. In this case, it adds the ability to detect sound patterns through a USB microphone and trigger an event when a match occurs. The keyword_spotting brick uses a pre-trained Edge Impulse audio classification model that identifies “Hey Arduino”. It continuously monitors the audio and, when it detects the keyword, triggers the microcontroller through the Bridge tool to activate the LED animation. To deploy the application on your Arduino UNO Q, click the green button in the top right corner. App Lab then starts deploying the application locally.

Hey Arduino application deploying to the Arduino UNO Q from the Arduino App Lab

Once the application is running on the UNO Q, say “Hey Arduino”. The LED animation should appear as soon as the keyword is detected.

Hey Arduino matrix animation on the Arduino UNO Q

Object detection

If you have connected a USB webcam to your Arduino UNO Q, you can test the Detect objects on Camera application. This application uses two bricks: video_object_detection and web_ui (WebUI - HTML), which hosts a web application and exposes APIs or WebSockets for the application to use.

Detect Object on Camera application from Arduino App Lab version 0.1.23

When you deploy this application on your Arduino UNO Q, the video_object_detection brick uses a pre-trained Edge Impulse model to find objects in a live video feed from the camera. To access the UI of the application, open a browser and go to the local IP address of your Arduino UNO Q on port 7000.
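If you are not sure of the board's address, one quick way to check it is from a terminal on the UNO Q itself; the address shown below is only an example, so use whatever your network assigns:

# On the Arduino UNO Q: print the board's IP addresses
hostname -I

# Then, from a computer on the same network, open the UI in a browser:
# http://<board-ip>:7000  (for example, http://192.168.1.50:7000)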

Testing the Video Generic Object Detection application after deploying the Detect Object on Camera application from Arduino App Lab into the Arduino UNO Q

Deploying a custom application

Now that you have tested Arduino App Lab on your Arduino UNO Q, it's time to create your own application using an impulse trained in your Edge Impulse account. We are going to replicate the Detect objects on Camera application, but use a custom model instead of the pre-trained model that comes with the brick. For this tutorial, we will use a face detection model.

Creating a new app

Click on “My Apps” in the left menu of Arduino App Lab, then click the Create new app + button in the top right corner.

My Apps section in Arduino App Lab version 0.1.23

Give the application a name, and change the emoji if you’d like. Inside the application, click on the left menu to add the bricks mentioned earlier (video_object_detection and web_ui).

New application created with bricks in Arduino App Lab version 0.1.23

After you create the application, we will perform the rest of the steps over SSH, as some of the source code is not yet available in the App Lab GUI as of version 0.1.23.
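If you do not already have an SSH session open, you can connect from a terminal on your computer. The IP address below is a placeholder; replace it with your board's address (the user in the paths used throughout this tutorial is arduino):

# Connect to the Arduino UNO Q over SSH (replace the IP with your board's address)
ssh arduino@192.168.1.50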

Copying the impulse

Next, you will need the .eim file generated by Edge Impulse Studio for Linux aarch64. You can either download the model directly on the board using the Edge Impulse Linux CLI tools, or copy the .eim file manually (over SSH or with VS Code, see below) to the following folder on the Arduino UNO Q:
/home/arduino/.arduino-bricks/ei-models/
Placing your model here ensures the video_object_detection brick will use it for inference, instead of the default pre-trained model.
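As a reference, here is one way to get the model file into that folder from the command line. The file name and IP address are placeholders, and the first option assumes the Edge Impulse Linux CLI is installed on the board and you are logged in to your project:

# On the Arduino UNO Q: make sure the models folder exists
mkdir -p /home/arduino/.arduino-bricks/ei-models

# Option 1: download the .eim with the Edge Impulse Linux CLI (runs on the board)
edge-impulse-linux-runner --download /home/arduino/.arduino-bricks/ei-models/face-detection.eim

# Option 2: copy an .eim you already exported, from your computer to the board
scp face-detection.eim arduino@192.168.1.50:/home/arduino/.arduino-bricks/ei-models/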

Using VS Code

To copy the file from your local computer to the Arduino UNO Q using VS Code, follow these instructions:
  • Connect to your Arduino UNO Q via SSH using the Remote SSH extension
  • Open the target folder
  • Drag and drop the .eim file from your local computer into that folder

VS Code access to the Arduino UNO Q to develop the new application

Building the application

Go to your application in the folder:
/home/arduino/ArduinoApps/<name of your application>
Edit the app.yaml file over SSH and add this new variable to the video_object_detection brick:
name: Faces Detector
description: "Faces Detector by Edge Impulse"
ports: []
bricks:
- arduino:video_object_detection: {
    variables: {
      EI_OBJ_DETECTION_MODEL: /home/arduino/.arduino-bricks/ei-models/<name of your model>.eim
    }
  }
- arduino:web_ui: {}

icon: 😀
In this case, you need to edit the EI_OBJ_DETECTION_MODEL variable. You can update models for other bricks or use cases using these variables:
EI_AUDIO_CLASSIFICATION_MODEL
EI_CLASSIFICATION_MODEL
EI_KEYWORD_SPOTTING_MODEL
EI_MOTION_DETECTION_MODEL
EI_OBJ_DETECTION_MODEL
EI_V_ANOMALY_DETECTION_MODEL

Copying additional assets

After you have saved your changes, run the commands below to copy the assets and WebUI files from the example into your application, along with the main Python script.
cd ~/.local/share/arduino-app-cli/examples/video-generic-object-detection

cp -r assets/* /home/arduino/ArduinoApps/<name of your application>

cp python/main.py /home/arduino/ArduinoApps/<name of your application>/python/
The application is now functionally identical to the Detect objects on Camera example that we used before, except it uses your own custom model trained with Edge Impulse Studio.

Running the application

To run the application, type:
cd /home/arduino/ArduinoApps/<name of your application>

python3 main.py
Once the application is deployed, open a browser and go to the board's local IP address (or localhost on the Arduino UNO Q itself) on port 7000 to see the application. You should now see your custom model detecting faces instead of generic objects.
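If the page does not load, a quick sanity check from a terminal confirms that the web UI is being served on port 7000; run it directly on the board, or replace localhost with the board's IP address to check from another machine on the network:

# Print the HTTP status code returned by the web UI (expect 200 if it is up)
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:7000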

New application deployed in the Arduino UNO Q via Arduino App Lab with an Edge Impulse custom model

Feel free to create your own Arduino App Lab application and share your feedback with us in the Edge Impulse forum.