The NVIDIA Jetson and NVIDIA Jetson Orin devices are embedded Linux devices featuring a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Edge Impulse Studio.

‘NVIDIA Jetson Orin’ refers to the following devices:

- Jetson AGX Orin Series, Jetson Orin NX Series, Jetson Orin Nano Series

‘NVIDIA Jetson’ refers to the following devices:

- Jetson AGX Xavier Series, Jetson Xavier NX Series, Jetson TX2 Series, Jetson TX1, Jetson Nano

‘Jetson’ refers to all NVIDIA Jetson devices.

In addition to the NVIDIA Jetson and NVIDIA Jetson Orin devices we also recommend that you add a camera and/or a microphone. Most popular USB webcams work fine on the development board out of the box.
NVIDIA Jetson Orin
You will need a JetPack 6.0 GA image; see NVIDIA's Initial Setup Guide for your Jetson Orin developer kit for instructions on how to get JetPack 6.0 GA on your device. Note that you may need to update the UEFI firmware on the device when migrating to JetPack 6.0 from earlier JetPack versions.

NVIDIA Jetson

For NVIDIA Jetson devices use the SD Card image with JetPack 4.6.4. See also the JetPack Archive or the Jetson Download Center.

When finished, you should have a bash prompt via the USB serial port, or using an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)
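If you are unsure which JetPack/L4T release ended up on the device after flashing, you can check from the bash prompt. A minimal sketch, assuming a standard L4T image (the nvidia-jetpack meta package ships with recent JetPack releases):

```bash
# Print the L4T release that is installed (maps to a JetPack version)
cat /etc/nv_tegra_release

# On images that include the nvidia-jetpack meta package, query the JetPack version directly
apt-cache show nvidia-jetpack | grep -i version
```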
To connect the device, run `edge-impulse-linux` on the Jetson. This starts a wizard that asks you to log in and select an Edge Impulse project; if you want to switch to a different project later, run the command again with `--clean`. Once the wizard finishes, the Jetson is listed under Devices in your Edge Impulse project.

Device connected to Edge Impulse.
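A minimal sketch of the connection flow, assuming the Edge Impulse Linux CLI is already installed on the Jetson:

```bash
# Start the wizard: log in and select an Edge Impulse project
edge-impulse-linux

# Later, to attach the device to a different project,
# clear the stored configuration and run the wizard again
edge-impulse-linux --clean
```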
When deploying your model, pick the option in the Edge Impulse Studio that matches the JetPack version installed on your device:

| JetPack version | EIM Deployment | Docker Deployment |
| --- | --- | --- |
| 4.6.4 | NVIDIA Jetson (JetPack 4.6.4) | Docker container (NVIDIA Jetson - JetPack 4.6.4) |
| 5.1.2 | NVIDIA Jetson Orin (JetPack 5.1.2) | Docker container (NVIDIA Jetson Orin - JetPack 5.1.2) |
| 6.0 | NVIDIA Jetson Orin (JetPack 6.0) | Docker container (NVIDIA Jetson Orin - JetPack 6.0) |
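For the Docker deployment options, the exact `docker run` command (container image and arguments) is generated for your project on the Deployment page in the Studio. The sketch below is only a placeholder showing the general shape of running such a container as a local HTTP inference server; the image name, API key, and arguments are assumptions here and should be replaced with the command copied from the Studio:

```bash
# Placeholder only - copy the real command from Deployment > Docker container in the Studio.
# <IMAGE> and <YOUR_API_KEY> are stand-ins, not real values.
docker run --rm -it \
  -p 1337:1337 \
  <IMAGE> \
    --api-key <YOUR_API_KEY> \
    --run-http-server 1337
```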
To run your impulse locally, connect to the Jetson again and use the Edge Impulse Linux runner; it serves a live feed with classification results that you can open in your browser.
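A minimal sketch, assuming the device is already connected to your project:

```bash
# Build, download, and run the trained model (.eim) directly on the Jetson
edge-impulse-linux-runner

# The runner prints a local URL; open it in a browser on the same network
# to watch the camera feed and live classification results.
```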
For NVIDIA Jetson Xavier NX, use power mode ID 8.

Additionally, due to the Jetson GPU's internal architecture, running small models on it is less efficient than running larger models. For example, the continuous gesture recognition model runs faster on the Jetson CPU than on the GPU with TensorRT acceleration. According to our benchmarks, running vision models and larger keyword spotting models on the GPU results in faster inference, while smaller keyword spotting models and gesture recognition models (this category also includes simple fully connected NNs that can be used for analyzing other time-series data) perform better on the CPU.
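The mode ID refers to the Jetson's nvpmodel power modes. A minimal sketch of switching to the maximum-performance mode before benchmarking, assuming the stock JetPack power tools; the available modes differ per module and are defined in /etc/nvpmodel.conf:

```bash
# Show the current power mode (available modes are listed in /etc/nvpmodel.conf)
sudo nvpmodel -q

# Switch to the maximum-performance mode:
# mode 0 (MAXN) on most Jetson modules, mode 8 on the Jetson Xavier NX
sudo nvpmodel -m 0        # on Jetson Xavier NX: sudo nvpmodel -m 8

# Optionally pin the CPU/GPU clocks to their maximum frequencies
sudo jetson_clocks
```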