The AKD1000-powered PCIe boards can be plugged into a developer's existing Linux system to unlock capabilities for a wide array of edge AI applications, including Smart City, Smart Health, Smart Home and Smart Transportation. Linux machines with the AKD1000 are supported by Edge Impulse, so you can sample raw data, build models, and deploy trained embedded machine learning models directly from Edge Impulse Studio to create the next generation of low-power, high-performance ML applications.
To learn more about BrainChip technology please visit BrainChip's website: https://brainchip.com/products/
To enable this device for Edge Impulse deployments you must install the following dependencies on your Linux target that has an Akida PCIe board attached.
Python 3.8: Python 3.8 is required for deployments via the Edge Impulse CLI or AKD1000 deployment blocks, because the generated binary file relies on specific paths created for the combination of Python 3.8 and Python Akida™ Library 2.3.3 installations. Alternatively, if you intend to write your own code with the Python Akida™ Library or the Edge Impulse SDK via the BrainChip MetaTF Deployment Block option, you may use Python 3.7 - 3.10.
Python Akida™ Library 2.3.3: A Python package for quick and easy model development, testing, simulation, and deployment for BrainChip devices
Akida™ PCIe drivers: This will build and install the driver on your system to communicate with the above AKD1000 reference PCIe board
Edge Impulse Linux: This will enable you to connect your development system directly to Edge Impulse Studio
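A minimal install sketch covering the dependencies above (the PCIe driver repository name and build steps are assumptions; follow BrainChip's driver README for the exact procedure):

```bash
# Python Akida(TM) library, pinned to the version referenced above
pip install akida==2.3.3

# Akida PCIe driver sources (repository name assumed) -- build and install per its README
git clone https://github.com/Brainchip-Inc/akida_dw_edma

# Edge Impulse Linux CLI (requires Node.js and npm)
sudo npm install -g --unsafe-perm edge-impulse-linux
```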
With all software set up, connect your camera or microphone to your operating system and run:
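For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```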
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
After adding data via Data acquisition and starting an Impulse Design, you can add a BrainChip Akida™ Learning Block. The types of Learning Blocks visible depend on the type of data collected. Using BrainChip Akida™ Learning Blocks ensures that models generated for deployment are compatible with BrainChip Akida™ devices.
In the Learning Block of the Impulse Design you can compare the Float, Quantized, and Akida™ versions of a model. If you added a Processing Block to your Impulse Design, you will need to generate features before you can train your model. If the project uses a transfer learning block, you may be able to select a base model from BrainChip's Model Zoo to transfer learn from. More models will be available in the future, but if you have a specific request please let us know via the Edge Impulse forums.
In order to achieve full hardware acceleration, models must be converted from their original format to run on an AKD1000. This can be done by selecting the BrainChip MetaTF Block from the Deployment screen. This will generate a .zip file with models that can be used in your application for the AKD1000. The block uses the CNN2SNN toolkit to convert quantized models to SNN models compatible with the AKD1000. You can then develop an application using the Akida™ Python package that calls the Akida™-formatted model found inside the .zip file.
Alternatively, you can use the AKD1000 Block to generate a pre-built binary that can be used by the Edge Impulse Linux CLI to run on your Linux installation with an AKD1000 Mini PCIe card present.
The output from this Block is an .eim file that, once saved onto the computer containing the AKD1000, can be run with the following command:
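For example (a sketch; check `edge-impulse-linux-runner --help` if your CLI version uses a different flag):

```bash
edge-impulse-linux-runner --model-file <path-to-model>.eim
```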
Alternatively, you can use the CLI to build, download, and run the model on your x86 or AArch64 device with the following command format:
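A hedged sketch of that command format; the AKD1000-specific target identifier is an assumption, so list the valid targets with `edge-impulse-linux-runner --help` before running:

```bash
# <akd1000-target> is a placeholder for the AKD1000 runner target on x86/AArch64
edge-impulse-linux-runner --force-target <akd1000-target>
```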
The AKD1000 has a unique ability to conduct training on the edge device. This means that new classes can be added to, or completely replace, the existing classes in a model. A model must be specifically configured and compiled with MetaTF to access this capability of the AKD1000. To enable the Edge Learning features in Edge Impulse Studio, please follow these steps:
Select a BrainChip Akida™ Learning Block in your Impulse design
In the learning block settings of the Impulse design, enable Create Edge Learning model under Akida Edge Learning options
Set the Additional classes and Number of neurons for each class and train the model. For more information about these parameters please visit BrainChip's documentation of the parameters. Note that Edge Learning compatible models require a specific setup for the feature extractor and classification head of the model. You can view how a model is configured by switching to Keras (expert) mode in the Neural Network settings and searching for "Feature Extractor" and "Build edge learning compatible model" comments in the Keras code.
Once the model is trained you may download the Edge Learning compatible model from either the project's Dashboard or the BrainChip MetaTF Model deployment block.
A public project with Edge Learning options is available in the Public Projects section of this documentation. To learn more about BrainChip's Edge Learning features and to find examples of its usage please visit BrainChip's documentation for Edge Learning.
We have multiple projects that are available to clone immediately to quickly train and deploy models for the AKD1000.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
This error (a missing `akida` library) is mainly related to initialization of the Akida™ NSoC and model, and could be caused by the Akida Python library not being installed. Please check if you have the Akida™ Python library installed:
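For example:

```bash
pip show akida
```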
Example output:
If you don't have the library (`WARNING: Package(s) not found: akida`), then install it:
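For example, pinning the version referenced earlier on this page:

```bash
pip install akida==2.3.3
```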
If you have the library, then check if the EIM artifact is looking for the library in the correct place. First, download your EIM model using Edge Impulse Linux CLI tools:
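A sketch of the download step (the `--download` option saves the project's .eim model to the given file):

```bash
edge-impulse-linux-runner --download model.eim
```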
Then run the EIM model with the debug option:
Now check if your `Location` directory from the `pip show akida` command is listed in your `sys.path` output. If not (this usually happens if you are using a Python virtual environment), then export `PYTHONPATH`:
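For example, assuming `pip show akida` reported `Location: /usr/local/lib/python3.8/dist-packages` (substitute your own path):

```bash
export PYTHONPATH=/usr/local/lib/python3.8/dist-packages:$PYTHONPATH
```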
And try to run the model with `edge-impulse-linux-runner` once again.
If the previous step didn't help, try to get additional debug data. With your EIM model downloaded, open one terminal window and do:
Then in another terminal:
This should give you additional info in the first terminal about the possible root of your issue.
This error could mean that your camera is in use by another process. Check that you don't have any application open that is using the camera. This error can also occur when a previous attempt to run `edge-impulse-linux-runner` failed with an exception. In that case, check if you have a `gst-launch-1.0` process running. For example:
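One way to check (the output in your terminal will differ):

```bash
ps aux | grep gst-launch-1.0
```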
In this case, the first number (here `5615`) is the process ID. Kill the process:
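For example, using the PID from the output above:

```bash
kill -9 5615   # replace 5615 with the PID from your own output
```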
And try to run the model with `edge-impulse-linux-runner` once again.
The Renesas RZ/V2L is a state-of-the-art general-purpose 64-bit Linux MPU with a dual-core ARM Cortex-A55 processor running at 1.2GHz and ARM Mali-G31 3D graphic engine.
The RZ/V2L EVK consists of a SMARC SOM module and an I/O carrier board that provides a USB serial interface, two-channel Ethernet interfaces, a camera and an HDMI display interface, in addition to many other interfaces (PMOD, microphone, audio output, etc.). The RZ/V2L EVK can be acquired directly through the Renesas website. Since the RZ/V2L is intended for vision AI, the EVK already includes the Google Coral camera.
The Renesas RZ/V2L board realizes hardware acceleration through the DRP-AI IP that consists of a Dynamically Configurable Processor (DRP), and Multiply and Accumulate unit (AI-MAC). The DRP-AI IP is designed to process the entire neural network plus the required pre- and post-processing steps. Additional optimization techniques reduce power consumption and increase processing performance. This leads to high power efficiency and allows using the MPU without a heat sink.
The Renesas tool “DRP-AI TVM” is used to translate machine learning models and optimize the processing for DRP-AI. The tool is fully supported by Edge Impulse. This means that machine learning models downloaded from the studio can be directly deployed to the RZ/V2L board.
Once downloaded and extracted, you need to download the patch file (RZG2L_VLP306u1_switch_to_nodejs_18.17.1.patch - coming soon!) and place the patch file into the same directory. (NOTE: Yes, the patch was initially developed for the G2L board, but it works equally well with the V2L boards given the specific file versions referenced above.)
NOTE: The current patch will have 3 rejections (gcc13.patch, nodejs_12.22.12.bb, nodejs_14.18.1.bb) all of which can be safely ignored.
After putting all of these files and the patch file into a single directory, you will need to create and patch your V2L Yocto build environment as follows (this can be saved as a script and run):
You can then invoke your V2L Yocto build process via:
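A typical invocation, assuming the standard SMARC RZ/V2L machine name and the Weston image used by the Renesas VLP BSP (verify the machine and image names against the Renesas build documentation):

```bash
MACHINE=smarc-rzv2l bitbake core-image-weston
```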
The easiest way is to connect to the RZ/V2L board through serial, using the USB mini-B port and the `screen` utility.
After connecting the board with a USB-C cable, please power the board with the red power button.
Please install `screen` on the host machine and then execute the following command from Linux to access the board:
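For example, assuming the board enumerates as `/dev/ttyUSB0` (the serial console runs at 115200 baud):

```bash
sudo screen /dev/ttyUSB0 115200
```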
You will see the boot process, then you will be asked to log in:
Log in with username `root`. There is no password.
Note that it should be possible to use an Ethernet cable and log in via SSH if the SSH daemon is installed on the image. However, for simplicity, we do not cover that here.
Once you have logged in to the board, please run the following command to install the Edge Impulse Linux CLI:
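A typical install, assuming Node.js and npm are already present on the image (install them first if not):

```bash
npm config set user root   # may not be required on newer npm versions
npm install -g --unsafe-perm edge-impulse-linux
```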
With all software set up, connect your Google Coral camera to your Renesas board (see 'Next steps' further on this page if you want to connect a different sensor), and run:
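For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```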
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
Currently, all Edge Impulse models can run on the RZ/V2L CPU which is a dedicated Cortex A55. In addition, you can bring your own model to Edge Impulse and use it on the device. However, if you would like to benefit from the DRP-AI hardware acceleration support including higher performance and power efficiency, please use one of the following models:
For object detection:
Yolov5 (v5)
FOMO (Faster objects More Objects)
For Image classification:
MobileNet v1, v2
The DRP-AI also supports models built within the Studio using the available layers on the training page.
Note that on the training page you have to select the target before starting training, in order to tell the Studio that you are training the model for the RZ/V2L. This can be done at the top right of the training page.
If you would like to do object detection with Yolov5 (v5), you need to fix the image resolution in the impulse design to 320x320; otherwise the training might fail.
With everything set up you can now build your first machine learning model with these tutorials:
If you are interested in using the EON Tuner to improve the accuracy of the model, this is currently possible only for image classification. EON Tuner support for object detection is arriving soon.
If you use the EON Tuner with image classification, you need to filter out the `int8` models since they are not supported by the DRP-AI. You also need to filter out the grayscale models. Note that if you leave the EON Tuner page, the filters will reset to the default settings, which means you need to re-apply them.
To run your impulse locally, just connect to your Renesas RZ/V2L and run:
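For example, using the standard runner command:

```bash
edge-impulse-linux-runner
```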
This will automatically compile your model with full hardware acceleration and download the model to your Renesas board, and then start classifying.
Or you can select the RZ/V2L board from the deployment page; this will download an `eim` model that you can use with the above runner as follows:
Go to the deployment page and select:
Then run the following on the RZ/V2L:
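A sketch, assuming the downloaded file is named `model.eim` (check `edge-impulse-linux-runner --help` if your CLI version uses a different flag):

```bash
edge-impulse-linux-runner --model-file model.eim
```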
You will see the model inferencing results in the terminal, and the results are also streamed to the local network. This allows you to see the output of the model in real time in your web browser. Open the URL shown when you start the runner and you will see both the camera feed and the classification results.
Since the RZ/V2L benefits from hardware acceleration using the DRP-AI, we provide you with the `drp-ai` library that uses our C++ Edge Impulse SDK and model headers that run on the hardware accelerator. If you would like to integrate the model source code into your applications and benefit from the DRP-AI, then you need to select the `drp-ai` library.
The RZBoard V2L is a power-efficient, vision-AI accelerated development board in a popular single board computer format with well-supported expansion interfaces. This Renesas RZ/V2L processor-based platform is ideal for development of cost-efficient vision AI and a range of energy-efficient edge AI applications. Its RZ/V2L processor has two 1.2GHz Arm® Cortex®-A55 cores plus a 200MHz Cortex-M33 core, a Mali 3D GPU and an Image Scaling Unit. This processor SoC further differentiates itself with an on-chip DRP-AI accelerator plus an H.264 video (1920 x 1080) encode/decode function in silicon, making it ideal for implementing cost-effective embedded-vision applications.
RZBoard V2L is engineered in a compact Raspberry Pi form factor with a versatile set of expansion interfaces, including Gigabit Ethernet, 802.11ac Wi-Fi and Bluetooth 5, two USB 2.0 host ports and a USB 2.0 OTG interface, MIPI DSI and CSI camera interfaces, a CANFD interface, a Pi-HAT compatible 40-pin expansion header and a Click Shuttle expansion header.
The board supports analog audio applications via its audio codec and stereo headphone jack. It also pins out five 12-bit ADC inputs for interfacing with analog sensors. 5V input power is sourced via a USB-C connector and managed via a single-chip Renesas RAA215300 PMIC device.
Onboard memory includes 2GB DDR4, 32GB eMMC and 16MB QSPI flash memory, plus microSD slot for removable media.
Software enablement includes CIP Kernel based Linux BSP (maintained for 10 years+) plus reference designs that highlight efficient vision AI implementations using the DRP-AI core. Onboard 10-pin JTAG/SWD mini-header and 4-pin UART header enable the use of an external debugger and USB-serial cable.
Available accessory options include a MIPI 7-inch display, MIPI CSI camera and 5V/3A USB Type C power supply.
The MemryX MX3 is the latest state-of-the-art AI inference co-processor for running trained Computer Vision (CV) neural network models built using any of the major AI frameworks (TensorFlow, TensorFlow Lite, ONNX, PyTorch, Keras), and it offers the widest operator support. Running alongside any host processor, the MX3 offloads CV inferencing tasks, providing power savings, latency reduction, and high accuracy. Multiple MX3s can be cascaded together, optimizing performance based on the model being run. The MX3 Evaluation Board (EVB) consists of a PCBA with four MX3s installed. Multiple EVBs can be cascaded using a single interface cable.
MemryX Developer Hub provides simple one-click compilation. This portal is intuitive and easy to use and includes many tools, such as a Simulator, Python and C++ APIs, and example code. Contact MemryX to request an EVB and access to the online Developer Hub.
To use MemryX devices with Edge Impulse deployments you must install the following dependencies on your Linux target that has a MemryX device installed.
For Debian-based Linux devices you may need to install using the following command. For other Linux distributions, please use the appropriate package manager for installation.
Ubuntu/Debian x86:
After working through a getting started tutorial, and with all software set up, connect your camera or microphone to your operating system and run:
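For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```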
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
Need sudo?
Some commands require the use of `sudo` in order to have proper access to a connected camera. If your `edge-impulse-linux` or `edge-impulse-linux-runner` command fails to enumerate your camera, please try the command again with `sudo`.
No need for an int8 model
MemryX devices only need a float32 model to be passed into the compiler. Therefore, when developing models with Edge Impulse it is better not to profile int8 (quantized) models. You may prevent generation, profiling, and deployment of int8 models by deselecting Profile int8 model under the Advanced training settings of your Impulse Design model training section.
The implementation of MemryX MX3 devices in the Edge Impulse SDK uses synchronous calls to the evaluation board. Therefore, frames-per-second figures are relative to that API. For faster performance there is an asynchronous API from MemryX that may be used in its place for high-performance applications. Please contact Edge Impulse Support for assistance in getting the best performance out of your MemryX MX3 devices!
In order to achieve full hardware acceleration models must be converted from their original format to run on the MX3. This can be done by selecting the MX3 from the Deployment Screen. This will generate a .zip file with models that can be used in your application for the MX3. The block uses the MemryX compiler so that the model will run accelerated on the device.
The MemryX Dataflow Program Deployment Block generates a .zip file that contains the converted Edge Impulse model usable by MX3 devices (.dfp file). One can use the MemryX SDK to develop applications using this file.
The output from this Block is an Edge Impulse .eim file that, once saved onto the computer with the MX3 connected, can be run with the following command:
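A sketch, assuming the file was saved as `model.eim` (check `edge-impulse-linux-runner --help` if your CLI version uses a different flag):

```bash
edge-impulse-linux-runner --model-file model.eim
```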
Running this command will ensure that the model runs accelerated on the MemryX MX3 device.
Need sudo?
Some commands require the use of `sudo` in order to have proper access to a connected camera. If your `edge-impulse-linux` or `edge-impulse-linux-runner` command fails to enumerate your camera, please try the command again with `sudo`.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Here are great getting started projects that you can clone to test on your device!
Please restart your MX3 evaluation board by using the reset button. Then use the `edge-impulse-linux-runner` command again. If you are still having issues, please contact Edge Impulse support.
You may need to use `sudo edge-impulse-linux-runner` to be able to access the camera on your system.
Ensure that you do not have any open processes still using the camera. For example, if you have the Edge Impulse web browser image acquisition page open or a virtual meeting software, please close or disable the camera usage in those applications.
This error could mean that your camera is in use by another process. Check if you have any application open that is using the camera. This error can also occur when a previous attempt to run `edge-impulse-linux-runner` failed with an exception. In that case, check if you have a `gst-launch-1.0` process running. For example:
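One way to check (the output in your terminal will differ):

```bash
ps aux | grep gst-launch-1.0
```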
In this case, the first number (here `5615`) is the process ID. Kill the process:
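For example, using the PID from the output above:

```bash
kill -9 5615   # replace 5615 with the PID from your own output
```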
And try to run the model with `edge-impulse-linux-runner` once again.
If the previous step didn't help, try to get additional debug data. With your EIM model downloaded, open one terminal window and do:
Then in another terminal:
This should give you additional info in the first terminal about the possible root of your issue. Contact Edge Impulse Support with the results.
The SK-AM62A-LP is built around TI's AM62A AI vision processor, which includes an image signal processor (ISP) supporting up to 5 MP at 60 fps, a 2 tera-operations-per-second (TOPS) AI accelerator, a quad-core 64-bit Arm® Cortex®-A53 microprocessor, a single-core Arm Cortex-R5F and H.264/H.265 video encode/decode. The SK-AM62A-LP is an ideal choice for those looking to develop low-power smart camera, dashcam, machine-vision camera and automotive front-camera applications.
In order to take full advantage of the AM62A's AI hardware acceleration, Edge Impulse has integrated the TI Deep Learning Library and the AM62A-optimized Texas Instruments EdgeAI Model Zoo for low-to-no-code training and deployments from Edge Impulse Studio.
Edge Impulse supports PSDK 8.06.
To set this device up in Edge Impulse, run the following commands on the SK-AM62A-LP:
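A hedged sketch of the setup, assuming Node.js and npm are already available on the PSDK image (if not, install Node.js first):

```bash
npm install -g --unsafe-perm edge-impulse-linux
```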
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
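For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```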
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, run the following on your Linux platform:
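For example, using the standard runner command:

```bash
edge-impulse-linux-runner
```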
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Some of these projects were first developed for the TDA4VM, but will run on the AM62A as well!
The following guides will work for the SK-AM62A-LP as well:
The MistySOM-V2L (MW-V2L-E32G-D2G-I-WX-V0) is built around the Renesas RZ/V2L, offering the same capabilities as the RZ/G2L but with a power-efficient NPU, making it suitable for low-power object detection and classification. The MistySOM-V2L is built from the ground up to enable battery-powered computer vision. It is ruggedized for industrial temperatures and offers long-term (10 year) firmware support via a CIP kernel based Linux BSP. Available separately is the MistyCarrier (MW-V2L-G2L-I-WWB-V0) board, providing a platform that allows easy access to a variety of interfaces.
The NPU of the MistySOM-V2L enables Jetson Nano-like performance for embedded video applications while using 50% less power, and supports multiple AI frameworks (ONNX, PyTorch, TensorFlow, etc), with the ability to offload processing to the CPU if required.
The MistySOM-V2L is capable of running some versions of YOLO at >20 FPS without a heatsink, and images and video can be captured through the 4-lane MIPI-CSI interface and efficiently H.264-encoded with the onboard codec. It includes a dual-core Cortex-A55 and a single-core Cortex-M33 CPU.
The SK-TDA4VM is a Linux-enabled development kit from Texas Instruments with a focus on smart cameras, robots, and ADAS applications that need multiple connectivity options and ML acceleration. The TDA4VM has 8 TOPS of hardware-accelerated AI combined with low-power capabilities to make this device capable of many applications.
In order to take full advantage of the TDA4VM's AI hardware acceleration, Edge Impulse has integrated the TI Deep Learning Library and the TDA4VM-optimized Texas Instruments EdgeAI Model Zoo for low-to-no-code training and deployments from Edge Impulse Studio.
Edge Impulse supports PSDK 8.06.
To set this device up in Edge Impulse, run the following commands on the SK-TDA4VM:
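A hedged sketch of the setup, assuming Node.js and npm are already available on the PSDK image (if not, install Node.js first):

```bash
npm install -g --unsafe-perm edge-impulse-linux
```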
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
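For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```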
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, run the following on your Linux platform:
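For example, using the standard runner command:

```bash
edge-impulse-linux-runner
```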
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Note that the DRP-AI is designed for feed-forward neural networks that are usually used in vision-based architectures. For more information about the DRP-AI, please refer to Renesas' DRP-AI documentation.
For more technical information about the RZ/V2L, please refer to Renesas' RZ/V2L documentation.
Renesas provides a Yocto build system to build all the necessary packages and create the Linux image. The Renesas documentation calls out that the build host must be based on Ubuntu 20.04. The following instructions outline the necessary steps to set up your build environment.
In order to use the Edge Impulse CLI tools, Node.js v18 needs to be installed into the Yocto image that you build. Given the instructions called out in the Renesas documentation, the following files need to be downloaded from Renesas (the specific versions specified are required):
In addition to the above files, you also need to download the DRP-AI support package from Renesas' website. Please consult the Renesas documentation for the software download link. Thus, all of the files needed for the build are:
The Renesas documentation then shows you the different build options and how to flash your compiled images onto your V2L board. Once your build completes, the files used in those subsequent flashing instructions can be found here:
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
We have an example showing how to use the `drp-ai` library, which can be found in our GitHub examples.
Please visit Avnet's RZBoard V2L product page for more information. For succinct documentation on creating a board image, please visit Avnet's RZBoard documentation.
: This is a prerequisite for the MemryX SDK.
MemryX tools and drivers: Please contact MemryX for access to their tools and drivers
Edge Impulse Linux: This will enable you to connect your development system directly to Edge Impulse Studio
Before working directly with MemryX devices, it is recommended that you work through an Edge Impulse getting started tutorial so that the Edge Impulse workflow is understood.
Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
First, one needs to follow the AM62A Quick Start Guide to install the Linux distribution to the SD card of the device.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Texas Instruments provides models that are optimized to run on the AM62A. Those that have Edge Impulse support are found in the links below. Each GitHub repository has instructions on installation to your Edge Impulse project. The original source of these optimized models is the Texas Instruments EdgeAI Model Zoo.
Please visit the MistyWest wiki to find related documentation on how to use MistySOM with Edge Impulse. You can also contact MistyWest directly for a support representative. If the wiki link requests a certificate when navigating to the site, just click Cancel and the wiki will load correctly.
First, one needs to follow the TDA4VM Quick Start Guide to install the Linux distribution to the SD card of the device.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Texas Instruments provides models that are optimized to run on the TDA4VM. Those that have Edge Impulse support are found in the links below. Each GitHub repository has instructions on installation to your Edge Impulse project. The original source of these optimized models is the Texas Instruments EdgeAI Model Zoo.
The SK-AM68 Starter Kit/Evaluation Module (EVM) is based on the AM68x vision SoC, which includes an image signal processor (ISP) supporting up to 480MP/s, an 8 tera-operations-per-second (TOPS) AI accelerator, two 64-bit Arm® Cortex®-A72 CPUs, and support for H.264/H.265 video encode/decode. The SK-AM68x is an ideal choice for machine vision, traffic monitoring, retail automation, and factory automation.
In order to take full advantage of the AM68A's AI hardware acceleration, Edge Impulse has integrated the TI Deep Learning Library and the AM68A-optimized Texas Instruments EdgeAI Model Zoo for low-to-no-code training and deployments from Edge Impulse Studio.
First, one needs to follow the AM68A Quick Start Guide to install the Linux distribution to the SD card of the device.
Edge Impulse supports PSDK 8.06.
To set this device up in Edge Impulse, run the following commands on the SK-AM68A:
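A hedged sketch of the setup, assuming Node.js and npm are already available on the PSDK image (if not, install Node.js first):

```bash
npm install -g --unsafe-perm edge-impulse-linux
```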
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
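For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```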
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, run the following on your Linux platform:
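For example, using the standard runner command:

```bash
edge-impulse-linux-runner
```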
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Some of these projects were first developed for the TDA4VM, but will run on the AM68A as well!
Texas Instruments provides models that are optimized to run on the AM68A. Those that have Edge Impulse support are found in the links below. Each Github repository has instructions on installation to your Edge Impulse project. The original source of these optimized models are found at Texas Instruments EdgeAI Model Zoo.
Texas Instruments MobileNetV2+SSDLite - Please contact Edge Impulse Support
Texas Instruments RegNetX800MF+FPN+SSDLite - Please contact Edge Impulse Support
Texas Instruments YOLOV5 - Please contact Edge Impulse Support
The following guides will work for the SK-AM68A as well:
The NXP I.MX 8M Plus is a popular SoC found in many single board computers, development kits, and finished products. When prototyping, many users turn to the official NXP Evaluation Kit for the i.MX 8M Plus, known simply as the i.MX 8M Plus EVK. The board contains many of the ports, connections, and external components needed to verify hardware and software functionality. The board can also be used with Edge Impulse, to run machine learning workloads on the edge.
The board contains:
4x Arm® Cortex-A53 up to 1.8 GHz
1x Arm® Cortex-M7 up to 800 MHz
Cadence® Tensilica® HiFi4 DSP up to 800 MHz
Neural Processing Unit
Special Note: The NPU is not currently used by Edge Impulse by default, but CPU inferencing alone is adequate in most situations. The NPU can be leveraged, however, if you export the TensorFlow model from your block output after your model has been trained, by following these instructions: Edge Impulse Studio -> Dashboard -> Block outputs. Once downloaded, you can build an application, or use Python, to run the model accelerated via the i.MX 8M Plus NPU.
6 GB LPDDR4
32 GB eMMC 5.1
i.MX 8M Plus CPU module
Base board
USB 3.0 to Type C cable.
USB A to micro B cable
USB Type C power supply
In addition to the i.MX 8M Plus EVK we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.
A few steps need to be performed to get your board ready for use.
You will also need the following equipment to complete your first boot.
Monitor
Mouse and keyboard
Ethernet cable or WiFi
NXP provides a ready-made operating system based on Yocto Linux, that can be downloaded from the NXP website. However, we'll need a Debian or Ubuntu-based image for Edge Impulse purposes, so you'll have to run an OS build and come away with a file that can be flashed to an SD Card and then booted up. The instructions for building the Ubuntu-derived OS for the board are located here: https://github.com/nxp-imx/meta-nxp-desktop
Follow the instructions, and once you have an image built, flash it to an SD Card, insert into the i.MX 8M Plus EVK, and power on the board.
Once booted up, open up a Terminal on the device, and run the following commands:
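A sketch of the typical dependency install for a Debian/Ubuntu-based image (the package list and Node.js version follow the usual Edge Impulse Linux CLI setup; adjust as needed for your image):

```bash
# Node.js (from NodeSource) plus common build, audio, and video dependencies
curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt update
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools \
  gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps

# Edge Impulse Linux CLI
sudo npm install -g --unsafe-perm edge-impulse-linux
```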
You may need to reboot the board once the dependencies have finished installing. Once rebooted, run:
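For example, using the standard Edge Impulse Linux CLI command:

```bash
edge-impulse-linux
```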
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with `--clean`.
That's all! Your i.MX 8M Plus EVK is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally on the i.MX 8M Plus EVK, open up a terminal and run:
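For example, using the standard runner command:

```bash
edge-impulse-linux-runner
```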
This will automatically compile your model, download the model to your i.MX 8M Plus EVK, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your i.MX 8M Plus EVK sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The i.MX 8M Plus EVK is a fully-featured development kit, making it a great option for machine learning on the edge. With its Ubuntu-based OS flashed, it is capable of both collecting data and running local inference with Edge Impulse.
If you have any questions, be sure to reach out to us on our Forums!