NVIDIA Jetson Nano

The Jetson Nano is an embedded Linux dev kit featuring a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add an external USB microphone or camera, and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Jetson Nano is available for 59 USD from a wide range of distributors, including SparkFun and Seeed Studio.
In addition to the Jetson Nano, we recommend that you also add a camera and/or a microphone. Most popular USB webcams work on the development board out of the box.
Powering your Jetson
Although powering your Jetson via USB is technically supported, some users report on forums that they have issues using USB power. If you have any issues such as the board resetting or becoming unresponsive, consider powering via a 5V, 4A power supply on the DC barrel connector. Don't forget to change the jumper! Here is an example power supply for sale.
As an added bonus, when powering via the DC barrel plug you can carry out your first boot without an external monitor or keyboard.
1. Setting up your Jetson Nano

Depending on your hardware, follow NVIDIA's setup instructions (NVIDIA Jetson Nano Developer Kit or NVIDIA Jetson Nano 2GB Developer Kit) for both "Write Image to SD Card" and "Setup and First Boot." Do not use the latest SD card image, but rather, download the 4.5.1 version for your respective board from this page. When finished, you should have a bash prompt via the USB serial port, or using an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)
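Once you have a shell on the Jetson, you can confirm which image you flashed by reading /etc/nv_tegra_release, a file present on Jetson boards that names the L4T release the image was built from (JetPack 4.5.1 corresponds to L4T R32.5.1). This is a quick sanity check, not an official verification step:

```shell
# Print the L4T release line from the flashed image. The file only exists on
# Jetson boards, so a fallback message is printed on other machines.
if [ -f /etc/nv_tegra_release ]; then
  head -n 1 /etc/nv_tegra_release
else
  echo "not a Jetson (no /etc/nv_tegra_release)"
fi
```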

2. Installing dependencies

Make sure your Ethernet connection has internet access

Issue the following command to check (any reachable host will do):
ping -c 3 www.google.com
The result should look similar to this:
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
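If ICMP is blocked on your network and ping fails even though you are online, a DNS lookup is another quick connectivity check. This is a sketch; the hostname is just an arbitrary example:

```shell
# Hedged connectivity check: resolve a well-known hostname via DNS instead of
# relying on ICMP, which some networks block. The hostname is an example.
if getent hosts edgeimpulse.com >/dev/null 2>&1; then
  echo "online"
else
  echo "offline - check the Ethernet cable and your network"
fi
```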

Running the setup script

To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).
wget -q -O - | bash

3. Connecting to Edge Impulse

With all software set up, connect your camera or microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
edge-impulse-linux
This will start a wizard that asks you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.

4. Verifying that your device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.

Deploying back to device

To run your impulse locally, connect to your Jetson again and run:
edge-impulse-linux-runner
This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples of integrating the model with your favourite programming language.

Image model?

If you have an image model, you can get a peek at what your device sees. Make sure you are on the same network as the device, then find the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser to see both the camera feed and the classification results:
Live feed with classification results

Running models on the GPU

Due to some incompatibilities, we don't run models on the GPU by default. You can enable this by following the TensorRT instructions in the C++ SDK.
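As a sketch of what the TensorRT route looks like with the Linux C++ SDK example repository: the flag names below come from example-standalone-inferencing-linux and may change between releases, so treat them as assumptions to verify against the repository's README:

```shell
# Hypothetical build invocation for GPU (TensorRT) inference with the
# Linux C++ SDK example repo - verify the flag names against its README.
git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
cd example-standalone-inferencing-linux
APP_CUSTOM=1 TARGET_JETSON_NANO=1 make -j4
```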


edge-impulse-linux reports "OOM killed!"

Using make -j without specifying a job limit can overtax system resources and cause "OOM killed" errors; this has been observed on many of our supported Linux-based SBCs.
Avoid using make -j without a limit. If you experience OOM errors, restrict the number of concurrent jobs. A safe practice is:
make -j$(nproc)
This sets the number of jobs to the number of cores on your machine, balancing performance and system load.
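If builds still get OOM-killed at one job per core (the Nano has four cores but limited RAM), you can leave one core of headroom. This is a sketch, not an official recommendation; the assumption is that fewer concurrent compiler processes means lower peak memory use:

```shell
# Leave one core free to reduce memory pressure during parallel builds
# (assumption: fewer concurrent compiler jobs lowers peak RAM use).
CORES=$(nproc)
JOBS=$(( CORES > 1 ? CORES - 1 : 1 ))
echo "building with ${JOBS} jobs"
# make -j"${JOBS}"   # run this inside your build tree
```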

edge-impulse-linux reports "[Error: Input buffer contains unsupported image format]"

This is probably caused by a missing dependency on libjpeg. If you run:
vips --vips-config
The end of the output should show support for file import/export with libjpeg, like so:
file import/export with libjpeg: yes (pkg-config)
image pyramid export: no
use libexif to load/save JPEG metadata: no
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
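You can automate that check with a small grep over the vips config output. The matched string is an assumption based on the "file import/export with libjpeg: yes" line shown above:

```shell
# Fail loudly when vips lacks libjpeg support; the grep pattern assumes the
# "file import/export with libjpeg: yes" line from vips --vips-config.
if vips --vips-config 2>/dev/null | grep -q "libjpeg: yes"; then
  echo "libjpeg support: present"
else
  echo "libjpeg support: missing - rerun the setup script"
fi
```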

edge-impulse-linux reports "Failed to start device monitor!"

If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
sudo chown -R $(whoami) $HOME

Long warm-up time and under-performance

By default, the Jetson Nano enables a number of aggressive power-saving features that disable or slow down hardware detected to be idle. In practice the GPU sometimes cannot power up fast enough, or stay on long enough, to deliver best performance. You can run a script to enable maximum performance on your Jetson Nano.
To enable maximum performance, run:
sudo nvpmodel -m 0
sudo /usr/bin/jetson_clocks
Additionally, due to the Jetson Nano GPU's internal architecture, running small models on it is less efficient than running larger models. For example, the continuous gesture recognition model runs faster on the Jetson Nano CPU than on the GPU with TensorRT acceleration.
According to our benchmarks, running vision models and larger keyword spotting models on the GPU results in faster inference, while smaller keyword spotting models and gesture recognition models (including simple fully-connected networks that can be used for analyzing other time-series data) perform better on the CPU.

Program fails to find shared library

If you see an error similar to this when running Linux C++ SDK examples with GPU acceleration,
jetson@localhost:~/example-standalone-inferencing-linux$ ./build/custom
./build/custom: error while loading shared libraries: cannot open shared object file: No such file or directory
then download and use SD card image version 4.6.1 for your respective board from this page. The error is likely caused by an incompatible version of NVIDIA's GPU libraries, or by the absence of those libraries altogether. JetPack 4.5.1 is the earliest version supported.
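To see whether the dynamic linker can find NVIDIA's GPU libraries at all, you can query the linker cache. The library name patterns below are examples that may differ per JetPack version; the exact missing library is named in the error message itself:

```shell
# List GPU-related libraries visible to the dynamic linker. The name patterns
# are examples; check the error message for the exact missing library.
(ldconfig -p 2>/dev/null || /sbin/ldconfig -p 2>/dev/null) \
  | grep -E 'libnvinfer|libcudart|libcudnn' \
  || echo "no GPU libraries found in the linker cache"
```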