NEOX GA100 is the industry's first RISC-V general-purpose GPU for the MCU market, enabling graphics, AI, and compute on the same IP. It builds on Think Silicon's hugely successful NEMA Pico GPU series.
It addresses the graphics and AI requirements of a diverse set of vertical markets in the embedded space, including wearables (smartwatches, smart glasses, hearables), smart home (thermostats, speakers, cameras, washing machines), mobility (e-bikes, e-scooters), and industrial HMI/IoT.
NEOX | GA100 is delivered with a complete suite of AI tools and libraries (the NEOX AI SDK) for the TensorFlow Lite for Microcontrollers ecosystem. The NEOX AI SDK has been integrated into the Edge Impulse platform, giving ML developers access to a complete set of ML tools to collect and shape data, perform model training, and easily assess how much inference time, power consumption, and RAM/flash will be required for their specific AI use case on the NEOX® | GA100. Example use cases that are ideal targets for the NEOX GA100 include detecting health anomalies, audio processing, and computer vision.
Interested users can deploy their AI algorithms on the NEOX GA100 hardware by downloading a NEOX® | Bits ISO image onto the Xilinx ZU19EG FPGA development board. The ISO image includes an out-of-box software environment designed to showcase and validate the graphics and AI technology developed by Think Silicon.
For further information, please contact the Think Silicon AI Application Engineering team at info_ai_tsi@amat.com.
Think Silicon's Neox SDK has been integrated into Edge Impulse for evaluation of model performance, so there are no dependencies to install to use this feature. You may contact Think Silicon for a copy of the SDK and to inquire about physical evaluation of their devices.
Edge Impulse is a platform that can collect data, process it, and train many different types of models capable of running on the Neox GA100. Please follow the Getting Started guides to create your first project.
Many of the Think Silicon AI-SDK tools are integrated into the backend of Edge Impulse, and we describe each of them below. To get the best-performing models, you must specify your project's deployment target as Think Silicon Neox GA100 (200 MHz); the tools are then used automatically during training and deployment.
When you start a project on Edge Impulse you may be asked to specify which device you intend to run on. Please choose the Think Silicon Neox GA100 (200 MHz) option in the dialog. This ensures that the Think Silicon simulators are used to generate inferencing time, RAM, and flash estimates specifically for the Neox GA100.
The Graph Analyzer checks whether a TFLite graph is supported by the Neox device. It inspects the operators of the graph and, if the graph contains unsupported operators, reports them in the log file outputs during model training and deployment.
The Model Compiler quantizes and then analyzes the model to apply optimizations to ensure the best performance for running on Neox.
In order to achieve full hardware acceleration, models must be converted from their original format to run on a Neox-enabled device. The Think Silicon Neox Deployment Block outputs a TFLite graph capable of running on the Neox. You may use this output with the Neox SDK to evaluate model performance on a Neox device. Please contact Think Silicon for a physical evaluation model of the Neox GA100.
The reComputer for Jetson series consists of compact edge computers built with NVIDIA's advanced AI embedded systems: Jetson-10 (Nano) and Jetson-20 (Xavier NX). With rich extension modules, industrial peripherals, and thermal management, combined with decades of Seeed's hardware expertise, reComputer for Jetson is ready to help you accelerate and scale the next-gen AI products emerging in diverse AI scenarios.
You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. Currently, four versions have been launched; see the reComputer Series Introduction web page.
This guide has only been tested with the reComputer J1020.
| Product | | | | |
|---|---|---|---|---|
| SKU | 110061362 | 110061361 | 110061363 | 110061401 |
| Side View | | | | |
| Equipped Module | Jetson Nano 4GB | Jetson Nano 4GB | Jetson Xavier NX 8GB | Jetson Xavier NX 16GB |
| Carrier Board | J1010 Carrier Board | Jetson A206 | Jetson A206 | Jetson A206 |
| Power Interface | Type-C connector | DC power adapter | DC power adapter | DC power adapter |
In addition to the Jetson Nano we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.
You will also need the following equipment to complete your first boot.
A monitor with HDMI interface. (For the A206 carrier board, a DP interface monitor can also be used.)
A set of mouse and keyboard.
An Ethernet cable or an external WiFi adapter (there is no WiFi on the Jetson).
The reComputer ships with an operating system pre-installed. Before using it, complete the necessary configuration steps by following the reComputer Series Getting Started web page. When completed, open a new Terminal by pressing Ctrl + Alt + T. It will look as shown:
Issue the following command to check:
The result should look similar to this:
To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).
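A sketch of the setup step, assuming the standard Edge Impulse Jetson setup script location (the URL is an assumption; confirm it against the current Edge Impulse documentation):

```shell
# Download and run the Edge Impulse setup script for the Jetson.
# NOTE: the script URL is an assumption; verify it in the Edge Impulse docs.
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | sudo bash
```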
With all software set up, connect your camera or microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
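The command referred to here is the Edge Impulse Linux CLI client:

```shell
# Start the wizard that links this device to an Edge Impulse project.
edge-impulse-linux

# To switch to a different project later, clear the stored configuration:
edge-impulse-linux --clean
```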
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just connect to your Jetson again, and run:
This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
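The command in question is the Linux runner:

```shell
# Download, compile, and run the current impulse on-device.
edge-impulse-linux-runner
```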
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Due to some incompatibilities we don't run models on the GPU by default. You can enable this by following the TensorRT instructions in the C++ SDK.
Using make -j without specifying job limits can overtax system resources, causing "OOM killed" errors, especially on resource-constrained devices; this has been observed on many of our supported Linux-based SBCs.
Avoid using make -j without limits. If you experience OOM errors, limit concurrent jobs. A safe practice is:
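A minimal sketch of the pattern, using a throwaway makefile purely for illustration:

```shell
# A trivial makefile stands in for a real project here.
printf 'all:\n\t@echo built\n' > /tmp/Makefile.demo

# Cap parallel jobs at the number of online CPU cores instead of an unbounded -j.
make -f /tmp/Makefile.demo -j"$(nproc)" all
```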
This sets the number of jobs to your machine's available cores, balancing performance and system load.
This is probably caused by a missing dependency on libjpeg. If you run:
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
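One way to do this, assuming your login user should own everything under your home directory:

```shell
# Recursively take ownership of your home directory (including ~/.config).
sudo chown -R "$(whoami)" "$HOME"
```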
By default, the Jetson Nano enables a number of aggressive power-saving features that disable and slow down hardware detected as not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to deliver the best performance. You can run a script to enable maximum performance on your Jetson Nano.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.
To enable maximum performance, run:
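A sketch using the stock L4T power tools (binary paths may vary by release):

```shell
# Select the maximum power budget, then pin clocks to their highest rates.
sudo /usr/sbin/nvpmodel -m 0
sudo /usr/bin/jetson_clocks
```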
Hackster.io tutorial: Train an embedded Machine Learning model based on Edge Impulse to detect hard hat and deploy it to the reComputer J1010 for Jetson Nano.
'NVIDIA Jetson Orin' refers to the following devices:
Jetson AGX Orin Series, Jetson Orin NX Series, Jetson Orin Nano Series
'NVIDIA Jetson' refers to the following devices:
Jetson AGX Xavier Series, Jetson Xavier NX Series, Jetson TX2 Series, Jetson TX1, Jetson Nano
'Jetson' refers to all NVIDIA Jetson devices.
The NVIDIA Jetson and NVIDIA Jetson Orin devices are embedded Linux devices featuring a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Edge Impulse Studio.
In addition to the NVIDIA Jetson and NVIDIA Jetson Orin devices we also recommend that you add a camera and/or a microphone. Most popular USB webcams work fine on the development board out of the box.
Powering your Jetson
Although powering your Jetson via USB is technically supported, some users report on forums that they have issues using USB power. If you have any issues such as the board resetting or becoming unresponsive, consider powering via the DC barrel connector. Don't forget to change the jumper! See your target's manual for more information.
An added bonus to powering via the DC barrel plug: you can carry out your first boot without an external monitor or keyboard.
For example:
For NVIDIA Jetson Orin:
When finished, you should have a bash prompt via the USB serial port, or using an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)
Issue the following command to check:
The result should look similar to this:
To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson/Orin in step 1. The entire script takes a few minutes to run (using a fast microSD card).
For Jetson:
For Orin:
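A sketch of both variants, assuming the standard Edge Impulse setup script locations (the URLs are assumptions; confirm them against the current documentation):

```shell
# NOTE: script URLs are assumptions; verify them in the Edge Impulse docs.
# For Jetson:
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | sudo bash
# For Orin:
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/orin.sh | sudo bash
```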
With all software set up, connect your camera or microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
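The command referred to here is the Edge Impulse Linux CLI client:

```shell
# Link this device to an Edge Impulse project; --clean switches projects.
edge-impulse-linux
```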
With everything set up you can now build your first machine learning model with these tutorials:
Choose the deployment target according to your device and JetPack version. See table below.
To run your impulse locally, just connect to your Jetson again, and run:
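The command in question is the Linux runner:

```shell
# Compile and run the current impulse on-device with acceleration.
edge-impulse-linux-runner
```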
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Using make -j without specifying job limits can overtax system resources, causing "OOM killed" errors, especially on resource-constrained devices; this has been observed on many of our supported Linux-based SBCs.
Avoid using make -j without limits. If you experience OOM errors, limit concurrent jobs. A safe practice is:
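A minimal sketch of the pattern, using a throwaway makefile purely for illustration:

```shell
# A trivial makefile stands in for a real project here.
printf 'all:\n\t@echo built\n' > /tmp/Makefile.demo

# Cap parallel jobs at the number of online CPU cores instead of an unbounded -j.
make -f /tmp/Makefile.demo -j"$(nproc)" all
```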
This sets the number of jobs to your machine's available cores, balancing performance and system load.
This is probably caused by a missing dependency on libjpeg. If you run:
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
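One way to do this, assuming your login user should own everything under your home directory:

```shell
# Recursively take ownership of your home directory (including ~/.config).
sudo chown -R "$(whoami)" "$HOME"
```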
By default, the Jetson enables a number of aggressive power-saving features that disable and slow down hardware detected as not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to deliver the best performance. You can run a script to enable maximum performance on your Jetson.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON THROUGH USB.
Your Jetson device can operate in different power modes: each mode is a power budget with a predefined configuration of CPU and GPU frequencies and the number of cores online. To enable maximum performance:
Switch to a mode with the maximum power budget and/or frequencies.
Then set the clocks to maximum.
To determine the maximum mode for your device visit the Supported Modes and Power Efficiency section in Jetson Linux Developer Guide for your L4T.
To enable maximum performance, switch to mode ID 0 and set the maximum frequencies of the clocks as follows.
For NVIDIA Jetson Xavier NX use mode ID 8
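As a sketch (mode IDs differ per device; check the supported-modes table for your L4T release):

```shell
# 1) Switch to the maximum power-budget mode (mode 0 here; mode 8 on Xavier NX).
sudo nvpmodel -m 0
# 2) Pin all clocks to their maximum frequencies.
sudo jetson_clocks
```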
Additionally, due to the Jetson GPU's internal architecture, running small models on it is less efficient than running larger models. For example, the continuous gesture recognition model runs faster on the Jetson CPU than on the GPU with TensorRT acceleration.
According to our benchmarks, running vision models and larger keyword-spotting models on the GPU will result in faster inference, while smaller keyword-spotting models and gesture recognition models (this also includes simple fully-connected NNs that can be used for analyzing other time-series data) will perform better on the CPU.
If you see an error similar to this when running Linux C++ SDK examples with GPU acceleration,
Follow NVIDIA's setup instructions for your hardware.
use SD Card image with or
use SD Card image with
Note that you may need to update the UEFI firmware on the device when migrating to JetPack 6.0 from earlier JetPack versions. See for instructions on how to get JetPack 6.0 GA on your device.
For NVIDIA Jetson devices use SD Card image with . See also or .
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
For more information on Docker deployment see .
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full GPU and hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
For example for :
For :
then please download and use the SD card image version for your target. The error is likely caused by an incompatible version of NVIDIA's GPU libraries, or the absence of these libraries.
| JetPack version | Deployment target | Docker deployment target |
|---|---|---|
| 4.6.4 | NVIDIA Jetson (JetPack 4.6.4) | Docker container (NVIDIA Jetson - JetPack 4.6.4) |
| 5.1.2 | NVIDIA Jetson Orin (JetPack 5.1.2) | Docker container (NVIDIA Jetson Orin - JetPack 5.1.2) |
| 6.0 | NVIDIA Jetson Orin (JetPack 6.0) | Docker container (NVIDIA Jetson Orin - JetPack 6.0) |