Industrial device monitoring and predictive maintenance are becoming crucial aspects of industries relying on heavy machinery and equipment. Predictive maintenance, in particular, has emerged as a critical approach to optimizing maintenance strategies and minimizing costly equipment failures. By leveraging IoT sensors and machine learning on the edge, the landscape of predictive maintenance has undergone significant transformation. This shift allows for real-time data collection, analysis, and decision-making at the edge, enabling faster response times and proactive maintenance actions. Unlike traditional predictive maintenance techniques, which often rely on periodic inspections or time-based schedules, this new framework takes advantage of continuous monitoring, anomaly detection, and predictive analytics to anticipate and even prevent equipment failures, resulting in increased operational efficiency, reduced downtime, and significant cost savings.
The market for IoT devices dedicated to predictive maintenance lacks efficient devices that can collect multiple streams of sensor data, analyze them, and make decisions all in one package. Edge Impulse has partnered with ReLoc to design our first industrial reference design device - the BrickML. BrickML is a small form-factor device powered by a Renesas RA6M5, designed specifically to operate in industrial environments.
The RA6M5 comes equipped with a 32-bit 200 MHz Arm Cortex-M33 microcontroller, 2 MB of flash memory, 8 kB of data flash for EEPROM-like storage, and 512 kB of SRAM, making it an extremely powerful device for real-time data processing. Complete integration with the Edge Impulse ecosystem puts machine-learning-based smart decision making at users' fingertips. The BrickML comes encased in a protective enclosure that allows installation in various industrial conditions with little effort. Loaded with sensors - microphone, inertial, environmental (temperature and humidity) - the BrickML makes close monitoring of equipment extremely simple. In addition, expanded ADC functionality allows the BrickML to be used in conjunction with a non-invasive current sensor to carry out motor current signature analysis (MCSA) on various equipment. The BrickML comes pre-loaded with a motion detection model.
The firmware for the BrickML is open source and hosted on GitHub: edgeimpulse/brickml
To start using the BrickML with the Edge Impulse studio, no additional software is required. Simply install the Edge Impulse CLI, create a project on the Edge Impulse Studio, and you're ready to go.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Create a new project in the Studio or clone one of the public projects available in our projects library. A new project will be empty (no pre-existing dataset), whereas a cloned project will already contain data. Neither will have any devices associated with it, which you can check under the Devices tab.
Connect the BrickML to your computer and start the Edge Impulse daemon from a command prompt or terminal:
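The daemon is part of the Edge Impulse CLI:

```
edge-impulse-daemon
```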
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
If prompted to select a device, choose BRICKML:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
Once logged in, the wizard will ask which project the device should be connected to. From this list, choose the project that you created in step one.
After the project is selected, the daemon will update to let you know that the connection is successful. Enter a name for your device at the prompt, and your device is now connected to the studio. The Devices tab in your project on the studio will also indicate successful connection of the BrickML with a green indicator. You can now start collecting your data.
Note: Docker builds can be done on macOS, Windows 10 & 11, and Linux machines with x86_64 architecture only.
If you are building with Docker, you will need to have Docker Desktop installed. You will need to do this if you want to build a wrapper application around your BrickML project while taking advantage of the Edge Impulse provided ingestion and inference libraries.
Run the Docker Desktop executable, or start the docker daemon from a terminal as shown below:
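On Linux, the daemon can typically be started via systemd (assumed here); on macOS and Windows, launching the Docker Desktop application starts the daemon for you:

```
sudo systemctl start docker
```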
From the BrickML firmware directory, build the Docker container:
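As a sketch, a typical build invocation looks like this; the brickml-build image tag is illustrative, so use whatever name the repository README specifies:

```
# Run from the root of the brickml firmware repository
docker build -t brickml-build .
```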
Build the firmware as follows, then flash your device with your application (as described below):
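A common pattern is to run the build inside the container with the source tree mounted as a volume; the command below is a sketch using the illustrative image tag from above, so check the repository README for the exact invocation:

```
# Mount the firmware sources into the container and run the build (illustrative)
docker run --rm -v "$PWD":/app brickml-build
```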
The default firmware for the BrickML is available here. To update the firmware or return to the default version with which the BrickML is shipped, use the provided ei_uploader.py script as follows:
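As a sketch (additional arguments, such as the serial port, may be required; see the script's --help output):

```
python3 ei_uploader.py -f firmware-brickml.bin.signed
```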
The -f parameter is optional and is set to the filename firmware-brickml.bin.signed by default.
The data sheet for the BrickML can be found here:
To enter bootloader mode, keep the button on the BrickML pressed while powering on the device.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The data forwarder lets you easily send data from any sensor into Edge Impulse.
Predictive maintenance powered by IoT sensors and machine learning on the edge has become a game-changer, empowering businesses to embrace a more proactive and precise approach to asset management. BrickML is an all-in-one approach to predictive maintenance, providing organizations with accurate insights and enabling proactive asset management.
The MCS AI Gateway 4434S adds local artificial intelligence (AI) and sends the optimized data via 4G/LTE to your existing (alarm or IoT) application platform. It includes the following major functionalities:
Local Artificial Intelligence (EDGE AI). The AI data algorithm is processed in the gateway, which prevents high data flows to the cloud. Flexible adjustment is possible by updating the AI client running in the gateway with new data.
Mobile Router Features. This is not just a gateway. It is a high-end industrial router with routing, security, VPN and mobile LTE (dual SIM) functionalities.
Easy Configuration and Deployment. (Remote) web server configuration of router and AI functionalities.
Remote Management. Firmware and AI client updates can easily be performed 'over the air'. Network diagnostics and other useful tools help you with the rollout.
Protocols in & out. Input protocols (from the camera, such as .jpg, .mjpeg, RTSP, H.264, H.265) and output protocols (to the application platform, such as REST, MQTT, JSON) are provided and configurable.
Thanks to work done by Edge Impulse partner Scailable, the MCS AI Gateway 4434S with ICR-V3 (arm v7) or V4 is seamlessly integrated for vision-based model deployments from the Edge Impulse Studio via the Scailable Cloud Platform.
For detailed instructions on setting up your device with the Scailable AI Manager, see these tutorials:
For an end-to-end guide on integrating your Edge Impulse model with the Scailable Cloud Platform, and deploying your Edge Impulse model to your product device, see these tutorials:
The Advantech ICAM-540 series is a highly integrated industrial AI camera equipped with a SONY IMX334 industrial-grade image sensor, based on the NVIDIA Orin NX SoM, with support for C-mount lenses. Featuring the CAMNavi SDK, a Google Chromium web browser utility, and the NVIDIA DeepStream SDK, the ICAM-540 series accelerates the development and deployment of cloud-to-edge vision AI applications.
The CAMNavi SDK uses the Python language by default and is well suited to image acquisition and AI algorithm integration. Meanwhile, the HTML5 web-based utility can be used to set up the camera and network configuration, lowering the installation effort.
The preloaded, optimized JetPack board support package allows you to seamlessly connect to AI cloud services. The Advantech ICAM-540 series is an all-in-one, compact and rugged industrial AI camera, ideal for a variety of edge AI vision applications.
Follow Advantech's setup instructions to power on and set up the ICAM-540. You may also need to purchase a camera lens that is appropriate for your application. You will need to connect power, keyboard, mouse, HDMI monitor, and the Ethernet connector. Log into the Ubuntu desktop that is preinstalled on the device.
On a fresh start, the camera sensor initializes with default image parameters (e.g., gain, exposure). Most of the time, the default parameters will not be suitable for the scene you want to observe. One solution is to set up the camera with the Basler pylon Viewer visual tool and save the camera sensor parameters for further use. pylon Viewer comes preinstalled on the ICAM-540.
First, launch the pylon Viewer tool from Basler:
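The path below assumes Basler's default install location for the pylon suite; adjust it if your installation differs:

```
/opt/pylon/bin/pylonviewer
```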
Turn on the camera in the GUI application by flipping the trigger and starting a continuous stream.
Now, adjust the camera sensor configurations to ensure the images coming from the sensor are of desired quality and lighting.
If you don't know where to start, the initial suggestions are to set Exposure Auto to Once and Gain Auto to Once. This way the sensor will adjust to the current frame conditions. Setting these to Continuous will make the sensor adjust these parameters dynamically as the frame changes.
After you are satisfied with the configuration, it needs to be saved in the filesystem in .pfs format for further reuse.
To do that:
Pause the stream by clicking on the "stop" icon
Open "Camera" menu on the top menu and click "Save Features"
Save the file in a filesystem path. It is recommended to create a directory for these configurations, e.g., /home/icam-540/basler-configs
Refer to Basler pylon Viewer documentation for more settings and usage tips
To set this device up in Edge Impulse, run the following command (from any folder). When prompted, enter the password you created for the user on your ICAM-540 in step 1. The entire script takes a few minutes to run (using a fast microSD card).
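The script URL below follows Edge Impulse's convention for NVIDIA Jetson Orin based devices and is given as an assumption; the ICAM-540 page in the Edge Impulse documentation has the authoritative command:

```
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/orin.sh | bash
```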
With camera settings configured and saved, e.g., in /home/icam-540/basler-configs/config-1.pfs, run the following command:
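A sketch of the invocation, assuming the CLI's GStreamer arguments are used to point at the Basler source and the saved .pfs file; the exact pipeline element and property names may differ on your setup:

```
edge-impulse-linux --gst-launch-args "pylonsrc pfs-location=/home/icam-540/basler-configs/config-1.pfs ! videoconvert"
```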
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. In the Data Acquisition tab of Edge Impulse Studio you may take images directly from the camera with those settings for use in developing your machine learning dataset.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, stop any previous Edge Impulse commands (CTRL+C) and run the following with the camera configuration you prefer (see above for info on camera configuration).
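A sketch of the runner invocation; whether the camera-pipeline arguments from the data acquisition step can be appended here in the same form is an assumption, so consult the CLI help for your version:

```
# Camera-pipeline arguments (see data acquisition above) may need to be appended
edge-impulse-linux-runner
```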
This will automatically compile your model with GPU and hardware acceleration, download the model to your device, and then start the inference, capturing the input with previously configured camera parameters. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Alternatively, you may download your model from the Deployment section of Edge Impulse Studio. Be sure to choose the Advantech ICAM-540 option to get the best acceleration possible.
Copy the downloaded .eim file to the device's file system and run this command on the device:
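For example (the file name is whatever you downloaded from the Studio):

```
edge-impulse-linux-runner --model-file ./model.eim
```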
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Using make -j without specifying job limits can overtax system resources, causing "OOM killed" errors; this has been observed on many of our supported Linux-based SBCs, especially resource-constrained devices.
Avoid using make -j without limits. If you experience OOM errors, limit concurrent jobs. A safe practice is:
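For example, using nproc to match the job count to the number of available cores:

```
make -j$(nproc)
```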
This sets the number of jobs to your machine's available cores, balancing performance and system load.
This is probably caused by a missing dependency on libjpeg. If you run:
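Assuming the image library in question is Pillow, its feature report (including libjpeg support) can be printed like this:

```
python3 -c "from PIL import features; features.pilinfo()"
```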
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
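One way to do this, which changes ownership of everything under your home directory:

```
sudo chown -R $(whoami) $HOME
```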
By default, Jetson Orin based devices use a number of aggressive power-saving features that disable or slow down hardware detected as not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to deliver the best performance. You can adjust your power settings in the menu bar of the Ubuntu desktop.
Additionally, due to the NVIDIA GPU's internal architecture, running small models on it is less efficient than running larger models. For example, the continuous gesture recognition model runs faster on the CPU than on the GPU with TensorRT acceleration.
According to our benchmarks, running vision models and larger keyword-spotting models on the GPU will result in faster inference, while smaller keyword-spotting models and gesture recognition models (this also includes simple fully connected NNs that can be used for analyzing other time-series data) will perform better on the CPU.
The Advantech MIC AI Series are a collection of versatile, fanless Edge AI Boxes with integrated NVIDIA® Jetson™ by Advantech.
Comprehensive product portfolio comprised of NVIDIA Jetson platforms
Flexible I/O and iDoor enable customers to adapt to different applications
Industrial design for wide temperature and vibration tolerance
BSP support
Supports remote management for large scale deployment
Thanks to work done by Edge Impulse partner Scailable, the Advantech MIC AI Series is seamlessly integrated for vision-based model deployments from the Edge Impulse Studio via the Scailable Cloud Platform. The Scailable AI Manager can be installed on any Advantech NVIDIA device using Allxon.
For detailed instructions on setting up your device with the Scailable AI Manager, see this tutorial:
For an end-to-end guide on integrating your Edge Impulse model with the Scailable Cloud Platform, and deploying your Edge Impulse model to your product device, see these tutorials:
The Advantech ICAM-500 series is a highly integrated industrial AI camera that significantly reduces installation and maintenance effort. It is equipped with programmable variable-focus lenses, LED illumination, a SONY industrial-grade image sensor, multi-core Arm processors, and an NVIDIA AI system-on-module.
Thanks to work done by Edge Impulse partner Scailable, the Advantech ICAM-500 is seamlessly integrated for vision-based model deployments from the Edge Impulse Studio via the Scailable Cloud Platform. The Scailable AI Manager can be installed on any Advantech NVIDIA device using Allxon.
For detailed instructions on setting up your device with the Scailable AI Manager, see these tutorials:
For an end-to-end guide on integrating your Edge Impulse model with the Scailable Cloud Platform, and deploying your Edge Impulse model to your product device, see these tutorials:
The Seeed SenseCAP A1101 - LoRaWAN Vision AI Sensor is an image recognition AI sensor designed for developers. It combines TinyML AI technology and LoRaWAN long-range transmission to enable a low-power, high-performance AI device solution for both indoor and outdoor use.
This sensor features Himax's high-performance, low-power AI vision solution, which supports the Google TensorFlow Lite framework and multiple TinyML AI platforms.
It is fully supported by Edge Impulse which means you will be able to sample raw data from the camera, build models, and deploy trained machine learning models to the module directly from the studio without any programming required. SenseCAP - Vision AI Module is available for purchase directly from Seeed Studio Bazaar.
To set A1101 up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Download the latest Bouffalo Lab Dev Cube-All-Platform
Problems installing the Edge Impulse CLI?
See the Installation and troubleshooting guide.
With all the software in place, it's time to connect the A1101 to Edge Impulse.
The BL702 is the USB-UART chip that enables communication between the PC and the Himax chip. You need to update its firmware in order for the Edge Impulse firmware to work properly.
Get the latest bootloader firmware (tinyuf2-sensecap_vision_ai_X.X.X.bin).
Connect the A1101 to the PC via a USB Type-C cable while holding down the Boot button on the A1101.
Open the previously installed Bouffalo Lab Dev Cube software, select BL702/704/706, and then click Finish.
Go to the MCU tab. Under Image file, click Browse and select the firmware you just downloaded.
Click Refresh, choose the Port related to the connected A1101, set Chip Erase to True, click Open UART, click Create & Download, and wait for the process to complete.
You will see the output as All Success if it went well.
If the flashing throws an error, click Create & Download multiple times until you see the All Success message.
The A1101 does not come with the Edge Impulse firmware pre-installed. To update the firmware:
Download the latest Edge Impulse firmware and extract it to obtain firmware.uf2 file
Connect the A1101 again to the PC via USB Type-C cable and double-click the Boot button on the A1101 to enter mass storage mode
After this you will see a new storage drive shown on your file explorer as SENSECAP. Drag and drop the firmware.uf2 file to SENSECAP drive
Once the copying is finished, the SENSECAP drive will disappear. This is how you can check whether the copying was successful.
From a command prompt or terminal, run:
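As with other fully supported boards, the connection is handled by the Edge Impulse daemon:

```
edge-impulse-daemon
```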
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your A1101, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Device connected to Edge Impulse correctly!
With everything set up, you can now build and run your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Frames from the onboard camera can be directly captured from the studio:
Finally, once a model is trained, it can be easily deployed to the A1101 – Vision AI Module to start inferencing!
After building the machine learning model and downloading the Edge Impulse firmware from Edge Impulse Studio, deploy the model .uf2 to the SenseCAP - Vision AI by following steps 1 and 2 under Update Edge Impulse firmware.
Drag and drop the firmware.uf2 file from EDGE IMPULSE to SENSECAP drive.
When you run this on your local interface:
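This is assumed to be the run-impulse command from the Edge Impulse CLI in debug mode:

```
edge-impulse-run-impulse --debug
```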
The CLI will ask you to open a URL; click it and you will see a live preview of the camera on your device.
If you want to compile the Edge Impulse firmware from the source code, you can visit this GitHub repo and follow the instructions included in the README.
The model used for the official firmware can be found in this public project.
In addition to connecting directly to a computer to view real-time detection data, you can also transmit the data through LoRaWAN® and upload it to the SenseCAP cloud platform or a third-party cloud platform. On the SenseCAP cloud platform, you can view the data periodically and display it graphically through your mobile phone or computer. The SenseCAP cloud platform and the SenseCAP Mate App use the same account system.
Since our focus here is on describing the model training process, we won't go into the details of the cloud platform data display. But if you're interested, you can always visit the SenseCAP cloud platform to try adding devices and viewing data. It's a great way to get a better understanding of the platform's capabilities!
You can get more information on how to use SenseCAP A1101 here
LoRaWAN® network coverage is required when using the sensors; there are two options.
Seeed provides:
SenseCAP M2 for Helium network
SenseCAP M2 Multi-Platform for standard LoRaWAN® network
If you are interested, please click through for more details.
Download SenseCAP Mate
Open SenseCAP Mate and login
Under Config screen, select Vision AI Sensor
Press and hold the configuration button on the SenseCAP A1101 for 3 seconds to enter Bluetooth pairing mode
Click Setup and it will start scanning for nearby SenseCAP A1101 devices
Go to Settings and make sure Object Detection and User Defined 1 are selected. If not, select them and click Send
Go to General and click Detect, you'll see the actual data here.
Click here to open a preview window of the camera stream
Click Connect button. Then you will see a pop up on the browser. Select SenseCAP Vision AI - Paired and click Connect
View real-time inference results using the preview window!
The cats are detected with bounding boxes around them. Here "0" corresponds to each detection of the same class. If you have multiple classes, they will be named 0, 1, 2, 3, 4 and so on. The confidence score for each detected object (0.72 in the demo above) is also displayed!