A TinyML model using Edge Impulse and the Wio Terminal with a thermal camera to predict faulty lithium ion cells in a BMS pack.
Created By: Manivannan Sivan
Public Project Link: https://studio.edgeimpulse.com/public/102553/latest
This prototype uses a Wio Terminal and Edge Impulse to predict overheated faulty cells in a BMS pack. For this project, I used an MLX 90640 Thermal Camera with the Wio Terminal to collect thermal data from a BMS pack.
A working demo of my prototype is available on YouTube here:
In an existing BMS, a temperature sensor is integrated with each cell pack (consisting of 14 cells) to identify an overheated cell pack, but there is no system to identify an individual faulty cell that is overheating within the pack.
Only one temperature sensor is deployed to measure the overall temperature of a battery pack (14 Li-ion cells).
Identifying the individual cell temperature is challenging due to infrastructure cost for a BMS pack.
Cost of deploying a temperature sensor for each cell:
Number of cells in BMS pack: 112
Cost of one temperature sensor: 500 INR ($6.75 USD)
Total cost: 112 * 500 INR = 56,000 INR ($760 USD approx.)
Additionally, no microcontroller has the interface support to read 112 individual temperature sensors.
I have developed a prototype using the MLX90640 thermal camera and the Wio Terminal to collect the thermal data of the BMS pack, and uploaded the datasets (labels: "Faulty Battery 1" through "Faulty Battery 6", plus "Normal") to Edge Impulse.
In this prototype, 6 lithium-ion cells are connected to the load (a rheostat), and the MLX90640 and Wio Terminal are attached to a stand so that the thermal camera faces downwards over the lithium-ion cells.
The MLX90640 sends 32x24 thermal data to the Wio Terminal through I2C. Since this project focuses on identifying an overheated cell in the pack, I have used simple filtering logic to filter out the normal cell temperature by setting it to zero.
Upload the datasets created for this project from the below link.
Go to Edge Impulse -> Data acquisition and then the Uploader option to upload the datasets.
If you want to develop new datasets from scratch, flash the below code to the Wio Terminal using the Arduino IDE. For that, you need to configure the Wio Terminal setup in the Arduino IDE; please follow this link to get set up: https://wiki.seeedstudio.com/Wio-Terminal-Getting-Started/
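Here is a minimal sketch of the idea, assuming the Adafruit MLX90640 library (the exact library and threshold value used in the original are a project choice). It reads a 32x24 frame over I2C, applies the simple thresholding filter described above (zeroing pixels at normal temperature), and prints the 768 values as a comma-separated line:

```cpp
#include <Adafruit_MLX90640.h>

Adafruit_MLX90640 mlx;
float frame[32 * 24];              // one full thermal frame
const float NORMAL_TEMP_C = 45.0;  // assumed cutoff; tune for your pack

void setup() {
  Serial.begin(115200);
  while (!Serial) delay(10);
  if (!mlx.begin()) {              // default I2C address 0x33
    Serial.println("MLX90640 not found!");
    while (1) delay(10);
  }
  mlx.setRefreshRate(MLX90640_2_HZ);
}

void loop() {
  if (mlx.getFrame(frame) != 0) return;  // skip on a failed read
  for (int i = 0; i < 32 * 24; i++) {
    // Zero out pixels at normal temperature so only hot cells remain
    float t = (frame[i] < NORMAL_TEMP_C) ? 0.0f : frame[i];
    Serial.print(t, 2);
    if (i < 32 * 24 - 1) Serial.print(",");
  }
  Serial.println();
  delay(500);
}
```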
This code will print the thermal data in array format; later it can be converted to .csv format as mentioned in the above datasets. Ideally, the .csv data looks like this:
Once the datasets are uploaded, then in the "Create impulse" section change the Window size to 768 (24 * 32 = 768).
Next, in Feature Explorer, we can see the generated raw features of thermal data.
I have used a reshape layer to change the 1D data to 2D data with 24 columns (due to the placement of the thermal camera); in some cases 32 columns might give the best accuracy.
Then I included a couple of 2D convolution layers with pooling layers, followed by a Flatten layer, and finally two dense layers (30 neurons, then 10 neurons) in sequence.
In the Deployment section, select the Arduino library option and download the package.
Then add the Zip file as a Library in Arduino IDE.
Once it is added, download the final application code from this GitHub link, and flash it to the Wio Terminal.
In model training, 100% accuracy was achieved, and in model testing 87.5% accuracy was achieved.
In the normal case, when all the batteries in the pack operate at normal temperature, the model predicts "Normal".
In a faulty battery condition, the model will predict the cell location index and display it with a predicted value. In this particular setup, a faulty cell is placed in location 5 and discharged for 1 hour. The cell gets overheated, and the model predicts the overheated cell location, number 5 in this battery pack.
If you cannot create a faulty cell for testing, you can simulate one using this method. Place a heated soldering iron on top of (near, but do not touch!) a battery cell, or move the soldering iron across the battery pack from cell 1 to cell 6. The model will predict the overheated cell locations as 1 through 6 as the soldering iron moves. By adding heat from the soldering iron, you can simulate the faulty battery condition and test it.
This project demonstrated a cheap and effective way to use computer vision and thermal imaging with the Wio Terminal to identify lithium-ion battery cells that are overheating, in a more granular fashion than would normally be possible. This is a prototype of course, but it could be used in robotics, automated warehouse and forklift devices, electric vehicles, or other places where batteries are arranged into packs.
Can a tiny Arduino device placed inside a power outlet anticipate outages using Machine Learning?
Created By: Roni Bandini
Public Project Link:
With power outages regularly occurring in Argentina and other regions of Latin America, we ask the question: "Can a tiny Arduino device placed inside a power outlet anticipate outages using Machine Learning?" On one hand, the Argentinian government says power outages occur due to a lack of private infrastructure investment. On the other, electricity distributors in the area argue that undiscriminated subsidized rates and inadequate regulations are the cause. In either case, private companies' production and equipment, which are not easy to replace or import in Argentina, suffer.
Can we apply Machine Learning to power outages?
This question was the starting point for this project, EdenOff, named after Edenor, one of the two power distributors in Buenos Aires.
What variables could be fed to a Machine Learning model to anticipate a power cut? Our answer: temperature. December through February are challenging months in Argentina, with temperature peaks reaching 104 degrees Fahrenheit / 40 degrees Celsius, resulting in intensive use of air conditioning (AC). Below, Figure 3 depicts the average temperatures in Argentina in these months.
Another relevant variable is the AC voltage reading. Outlets should be stable at 220 volts, and significant variations usually mean trouble.
One of many interesting things about Machine Learning is that we don’t have to determine rules in advance like in standard algorithm based code:
We can just use historical data to train a model and make inferences. That model could be placed into a cheap single-board microcontroller without even an internet connection.
For machine learning (ML) we will use Edge Impulse; a free platform for developers which provides powerful and interesting features that speed up any machine learning project.
Since this prototype does not use Edge Impulse supported sensors, data acquisition will be made separately.
How can such data be obtained? Data is obtained by power distributors, from auditors, or even from private custom records such as tracking AC variation and temperature to a database.
The model will be trained with a failure and a regular database with the following labeled columns: timestamp, temperature, voltage, as well as a column with the latest five average AC readings. The purpose of the AC reading column is to detect any recent variations in the service. Figure 5 displays what our failure dataset case looks like.
After a trial-and-error procedure, Keras (a deep learning API) with 75 training cycles and a neural network with a 0.0005 learning rate turned out to be effective. Edge Impulse's visual and testing tools were an excellent way to determine whether the datasets and training were correct, before starting to code.
Now that we are sure predictions work, we can export an Arduino-ready library and create the electronics.
Arduino Nano BLE 33: a powerful board, compatible with Edge Impulse TinyML
ZMPT101B voltage sensor: to read AC voltage from the outlet
7 Segment 4 digit display
Digital Buzzer
Female to female jumper wires
3.7v battery
Lipo Charger
Note 1: To simplify the project, the onboard HTS221 temperature sensor is used, but for a real-world scenario an external temperature sensor should be used.
Note 2: In previous projects, I was asked about using a standard Arduino Nano with external sensors. The most important thing about the Arduino Nano BLE 33 Sense is not its sensors but the processor; you will not be able to replace it with a regular Arduino Nano. If you cannot get a BLE 33 Sense, check out the Arduino Portenta H7.
Note 3: The ZMPT101B voltage sensor requires calibration during setup. You can find a small screw on the board for that. You may also need to adjust the voltage-reading function inside the .ino code to obtain reliable voltage readings; a sketch of such a function is shown below.
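As a sketch of the kind of RMS-based reading routine this sensor typically needs (the function name, sample count, and calibration constant here are placeholders, not the exact values from the original .ino file):

```cpp
// Approximate RMS AC voltage from the ZMPT101B on pin A0.
// CALIBRATION is a placeholder -- adjust it (and the trimmer screw on the
// sensor board) until the output matches a multimeter reading of the outlet.
float readVoltageAC() {
  const int   SAMPLES     = 500;   // spans several 50 Hz mains cycles
  const float CALIBRATION = 0.67;  // scale factor found during calibration
  float sumSquares = 0;
  for (int i = 0; i < SAMPLES; i++) {
    float centered = analogRead(A0) - 512.0;  // remove mid-rail offset (10-bit ADC)
    sumSquares += centered * centered;
  }
  return sqrt(sumSquares / SAMPLES) * CALIBRATION;
}
```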
ZMPT101B GND and 5V to Arduino GND and 5V; signal pin to Arduino A0; AC line to the screw terminals.
Display to GND and 5V, D12 and D11
Buzzer to GND and D10.
Battery to VIN and GND
Install the HTS221 library. Even though this is an onboard module, the HTS221 library is required. Go to Sketch > Include Library > Manage Libraries > Search HTS221.
Download this ZIP file, then add it via Sketch > Include Library > Add .ZIP Library.
Download the .ino file, load it in the Arduino IDE, connect the Arduino BLE 33 using a micro-USB cable, and upload.
Regarding code settings:
Threshold is used to compare against result.classification[ix].value for the failure class.
testFail is used to force a fail message and buzzer for testing purposes on iterations #1, #3, and #5.
If you want to average more than 5 readings, you will have to change the formula. A sketch of these settings is shown below.
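The following sketch illustrates the three settings above; the helper names (lastReadings, soundAlarm) and the "fail" label string are assumptions, not the exact ones from the original .ino file.

```cpp
// Classification score above this value counts as an imminent failure
#define THRESHOLD 0.80
// Set to true to force a fail message and buzzer on iterations #1, #3 and #5
bool testFail = false;

// Rolling buffer of the latest AC readings; enlarge it (and the divisor)
// if you want to average more than 5 readings
#define READINGS 5
float lastReadings[READINGS];

// Average of the latest AC readings, fed to the model as an extra axis
float averageReadings() {
  float sum = 0;
  for (int i = 0; i < READINGS; i++) sum += lastReadings[i];
  return sum / READINGS;
}

// Inside the inference loop, the failure score is checked against the threshold:
// for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
//   if (strcmp(result.classification[ix].label, "fail") == 0 &&
//       result.classification[ix].value > THRESHOLD) {
//     soundAlarm();  // hypothetical helper: buzzer + display message
//   }
// }
```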
This model was made considering Argentina's power outlet specifications but you can make your own and place the entire prototype inside a power outlet if needed.
Final notes:
Instead of the Arduino BLE 33 Sense, a Raspberry Pi 4 could be used, and temperature could be obtained from the internet along with the power distributor's demand data. Several actions could be triggered when a power cut is coming, like starting a gas-powered generator, alerting employees via Telegram, or turning off expensive machines with Linux commands or relays.
This machine learning project covers some points that could be used for other enterprise-related products like third-party sensors, importing CSV datasets, and custom axis for inferences.
Running anomaly detection on a Nordic Thingy:91 for predictive maintenance of machinery.
Created By:
Public Project Link:
Untimely critical machinery failure is one of the biggest troubles a plant manager must deal with when running a production facility. Because heavy machinery parts are expensive and lead times for replacement are getting longer and longer due to the supply crisis, employing preventive measures like equipping machinery with a predictive maintenance solution greatly improves the Overall Equipment Effectiveness (OEE).
Such a solution measures key health indicators of machinery like vibration, temperature, and noise, analyzes them using AI algorithms and sends alerts way before machinery breaks down, allowing the facility to reduce operating costs and increase production capacity.
To show you a real world use-case of predictive maintenance we have decided to use the Nordic Thingy:91, an easy-to-use prototyping dev kit for IoT projects, packed with a multitude of sensors relevant for our application: Low-power accelerometer, temperature, and pressure sensors.
The on-board nRF9160 System-in-Package (SiP) supports LTE-M, NB-IoT and GNSS if you wish to send the data to the cloud, and the nRF52840 allows the development of Bluetooth LE applications.
The 64 MHz Arm® Cortex®-M33 CPU is great for running a TinyML model on the edge used to detect anomalies while the machinery is running.
Our approach to building a predictive maintenance solution based on the Nordic Semi Thingy:91 is to attach it mechanically to a machine and collect accelerometer data during normal functioning. After a proper data set is acquired, we will train a TinyML model based on an Anomaly Detection Neural Network using Edge Impulse that will detect anomalies.
Nordic Semi Thingy:91
Micro-USB cable for Thingy:91
J-link debugging probe
Edge Impulse account
nRF Connect for Desktop v3.7.1
A working Zephyr RTOS build environment achieved by installing nRF Connect SDK
GNU ARM Embedded Toolchain (version 9-2019-q4-major)
The Thingy:91 comes equipped with all the required sensors for this use-case, so there is not much wiring to do. Plugging a micro-USB cable into the prototyping board is enough to do the data acquisition and to deploy the model back on the edge. If you wish to run it completely wirelessly, the 1359 mAh Li-Po battery is big enough to run the inference on the target for a while, varying based on the sensor reading frequency and the communication protocol used.
Our aim is to detect faulty operation or an approaching critical machinery failure in an extruding-based machine. For this, we have attached the Nordic Semi Thingy:91 to a 3D printer in order to better schedule our maintenance operations like unclogging the nozzle, oiling the linear bearings, dusting the fan etc. Using the same principles the use case can be adapted to other much larger extruders or machines that involve any type of motors that are vibrating when functioning.
First things first: to collect our dataset, we must flash the Thingy:91 firmware provided by Edge Impulse onto the dev kit.
Turn on the Thingy:91 while pressing on the multi-function button placed in the middle of the board. Release the button, connect it to the PC, launch nRF Connect for Desktop and open the Programmer.
Click on Select Device, select Thingy:91 and once returned to the programmer screen, make sure that Enable MCUboot is checked.
In the Programmer navigation bar, click Select device.
In the menu on the right, click Add HEX file > Browse, and select the firmware.hex file from the firmware previously downloaded at step 3.
Scroll down in the menu on the right to Device and click Write:
Right now, we have everything we need to connect the dev kit to our Edge Impulse project, collect the data and train the model. Next up, we must install all the prerequisites necessary for the Deployment Phase of this project. Take note that these are necessary only if you wish to build your own custom application.
Download the GNU ARM Embedded Toolchain (version 9-2019-q4-major) and extract it in /home/USER/gnuarmemb
The first step towards building your TinyML Model is creating a new Edge Impulse Project.
Once logged in to your Edge Impulse account, you will be greeted by the Project Creation screen.
Click on Create new project, give it a meaningful name, select Developer as your desired project type and press Create new project.
Afterward, select Accelerometer data as the type of data you wish to use.
With the project created, it’s time to connect a device to it. Power up the Thingy:91 and connect it via a USB cable to the PC. Open up a terminal and run:
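The command in question is the Edge Impulse daemon, the same CLI tool referenced elsewhere in this document:

```
edge-impulse-daemon
```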
You will be prompted with a message to insert your username and password and then you will be asked to select which device you would like to connect to.
You may notice that the Thingy:91 exposes multiple UARTs. Select the first one and press ENTER.
Next up, select the project you wish to connect the device to, press Enter and give it a recognisable name.
If you head back to Edge Impulse Studio, you will notice that the device shows up in the Devices Tab.
When monitoring an industrial system, purposefully running it in a faulty manner to collect data for training a failure-detection model is not possible, since breaking the machine would be out of the question. Instead, our approach is to collect data while the machine operates nominally / is idling / is powered off, and create an anomaly detection algorithm that will detect when something is out of order.
With the device connected, head over to the Data acquisition tab. Before acquiring data we must set a Sample Length and a Reading Frequency.
When sensors or other devices take measurements of some physical quantity, the process of converting this analogue signal into a digital representation is known as sampling. In order for the resulting digital signal to be an accurate representation of the original, it is important to respect the Nyquist–Shannon sampling theorem when carrying out this process. The Nyquist rate is twice the highest frequency present in the signal being sampled, and the sampling theorem states that if the sampling frequency is lower than the Nyquist rate, aliasing will occur. This means that high-frequency components in the signal will be misrepresented in the digital version, leading to errors in the measurements; for example, a 60 Hz vibration sampled at only 70 Hz shows up as a spurious 10 Hz component. This being said, we will pick the highest frequency available, to avoid the aliasing phenomenon.
When building the dataset, keep in mind that machine learning leverages data, so when creating a new class, try to record at least 3 minutes of data.
Also, remember to gather some samples for the testing dataset, so as to achieve a distribution of roughly 85/15 between training and testing set sizes.
Once the data acquisition phase is over, the next step is designing an Impulse. What an Impulse does is take raw data from your dataset, split it up into manageable chunks called "windows", extract features using signal processing blocks, and then classify new data by employing the learning block.
For this example, we will make use of the Spectral analysis signal processing block and the Classification and Anomaly Detection learning blocks.
Once the setup is done, click on Save impulse and move over to the Spectral features tab that appears under the Impulse Design menu. In this screen you can observe the raw data displayed on the top side of the window and the results of the signal processing block on the right side.
Digital signal processing theory is convoluted at times, so we are not going to dwell too deeply on this subject. Tweak the parameters with the target of obtaining similar results from similar data.
In our case, we noticed huge improvements in the model's accuracy when switching from a Low-pass filter to a High-pass filter and increasing the Scale axes factor to 30.
Once done configuring the DSP block, move forward to the Feature generation screen. Make sure that Calculate feature importance is checked and click on Generate Features.
The Feature explorer is one of the most powerful tools put at your disposal by Edge Impulse. It allows intuitive data exploration in a visual manner, letting you quickly validate whether your data separates nicely before moving on to training the model. It color-codes similar data and allows you to trace any point back to the sample it came from by just clicking on the data item, making it a great perk if you are trying to find the outliers in your dataset.
When you are working on a classification-based application, what you aim to see in the Feature explorer is clearly defined clusters of data points. Our data does not quite achieve that, but the small overlap between the clusters does not inconvenience us, as we are trying to detect when the system is running outside of those nominal parameters.
Once we are happy with the collected data, we will be moving forward to training a neural network.
Neural networks are computer algorithms that are designed to recognize patterns in large amounts of raw data. Similar in many ways to the human brain, a neural network is made up of interconnected layers of highly specialized neurons. Each neuron examines a particular aspect of the raw data, such as specific frequency patterns, and then passes this information on to the next layer through weighted connections. This process allows the network to learn how to identify different types of patterns over time, adjusting its weights accordingly based on what it has learned from past experience. Thus, neural networks have the ability to accurately recognize complex and nuanced patterns in virtually any type of data.
In the NN Classifier tab, under the Impulse Design menu, leave the parameters on the default settings, click on the Start Training button, and wait for the NN to be trained. Once done, you will be presented with training performance metrics like Accuracy and Loss. In a classification-based project we would be aiming for at least 95% accuracy, but in our case it is not required.
The Anomaly detector is a secondary model that we will employ to detect when data does not fit in any of the categories we have defined in the previous step.
When we were designing the impulse for this use-case, a very important step was to check Calculate feature importance before clicking on Generate Features. What this does is determine the most relevant features in the collected data, so as to increase the "resolution" of our model and to reduce the amount of processing power needed.
As you can see, the predominant features in our dataset are the accY RMS and accZ RMS.
Click on Anomaly detection under the Impulse Design menu. Click on Select suggested axes, leave the number of clusters set at 32, and click on Start Training. Once the training is done, you will be presented with the training results. You can observe that the Anomaly Explorer plots the 2 most important features against each other and defines areas around the collected data. When new data is gathered, it is placed on the same coordinate system, and if it falls outside the defined clusters, it is flagged as an anomaly.
Even though we said earlier in this guide that purposefully running the machinery in a faulty manner is out of the question, we induced a small clog for 10 seconds in our machine to gather authentic data.
To test out the model, head over to the Live Classification tab and press the Start Sampling button.
Under the Summary tab you can see the number of samples that were placed in each category, and on the right side of the screen you can see the Raw Data, Spectral Features and the Anomaly Explorer. Head over to Anomaly detection under the Impulse Design menu and load your newly gathered sample in the Anomaly Explorer to analyze it even further.
There are two ways to go about running the Impulse we have just designed on the edge: either deploying a pre-built binary, or exporting the Impulse as a C++ library and building the binary locally. Let's explore both in our use case and see the benefits of each:
Deploying the newly created model on the Nordic Thingy:91 implies running it without an internet connection, optimizing the power consumption of the device and minimizing latency between measurements and analyzing them.
Because the Thingy:91 board is fully supported by Edge Impulse, you can navigate to the Deployment tab, select the board and download a ready-to-go binary for it that includes the Impulse we have just built.
Deploying the model in this manner is a great way of evaluating the on-board performance of the Impulse with the smallest time investment possible. It allows you to go back and tweak the model until it reaches the desired performance for your application.
Follow the same steps you did when uploading the custom Edge Impulse firmware on the board, only this time upload the downloaded binary file.
Restart the board, connect it to your PC, launch a terminal and run:
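The command here is most likely the Edge Impulse CLI's impulse runner:

```
edge-impulse-run-impulse
```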
The Thingy:91 will start reading accelerometer data, run it through the previously configured DSP block and then classify it.
Notice that we are running inference on the edge at a 2-second interval. If you wish to change this parameter, navigate to the Impulse Design tab, select the desired window size, and re-train your model.
When you are done testing the model and you are happy with the results, you can use this second method to fully integrate the Impulse with whatever other code is required to make your device function fully stand-alone at the edge (this could include direct control of other devices, triggering alarms, logging data, or sending it remotely, based on your demands). Choosing this method of deployment, what you get is a library that contains all the signal processing blocks, learning blocks, configurations and SDK needed to integrate the ML model into your own custom application.
If you are curious, our main.cpp is built around the standard Edge Impulse C++ SDK inference loop; a condensed, illustrative sketch follows:
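This sketch uses real Edge Impulse C++ SDK symbols (signal_t, run_classifier, ei_impulse_result_t); the sensor-reading portion and the Zephyr setup code are device-specific and only indicated by a comment.

```cpp
#include <string.h>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull slices of the feature buffer
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main(void) {
    while (1) {
        // Fill `features` with fresh accelerometer readings here (device-specific)

        signal_t signal;
        signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
        signal.get_data = &get_signal_data;

        ei_impulse_result_t result;
        if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
            continue;
        }

        // Print class confidences and the anomaly score
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            ei_printf("%s: %.3f\n", result.classification[ix].label,
                      result.classification[ix].value);
        }
        ei_printf("anomaly: %.3f\n", result.anomaly);
    }
    return 0;
}
```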
While reactive and preventive maintenance require constant effort from the support team, predictive maintenance seems to be, in our opinion, a very good choice not only to minimize their presence on the factory floor but also help factories reduce their inventory costs by identifying spare parts that are likely to be needed in the future. As a result, predictive maintenance is a useful tool for factories that want to minimize disruptions and improve their bottom line.
The Nordic Thingy:91 is a very good development kit for rapid prototyping offering a good number of sensors and several connectivity options making it a good candidate for many use cases both industrial or even home automation related. It's also a good choice if you are not too hardware savvy or lack the tools to assemble/test electronic modules. The recipe presented above can be quickly modified and customized to enable the monitoring of other various environmental properties that you want to keep an eye on.
Install the latest version of Node.js from the official source on your OS of choice.
Install the Edge Impulse CLI. This is a suite of tools that is used to control local devices, act as a proxy to synchronize data for devices that are not connected to the internet, and to facilitate uploading and converting local files.
Afterwards, download the GNU ARM Embedded Toolchain (version 9-2019-q4-major) and extract the archive somewhere convenient.
Install the nRF Connect SDK. Follow the steps in the installation guide and, instead of installing a build IDE, set up the command-line build environment.
Install the nRF Command Line Tools and the J-Link software that will enable us to flash the board using the west command-line interface.
You can find a great guide about how to build and run your impulse on the Thingy:91 in the official Edge Impulse documentation.
If you need assistance in deploying your own solutions, or more information about the tutorial above, please reach out!
With predictive maintenance, you can monitor your equipment while it’s running: This means that there is less downtime for inspections and repair jobs because the monitoring process takes place during operation instead of waiting until something breaks or wears out.
The Edge Impulse platform and solutions engineering team enables companies to make more accurate predictions about when devices might fail, which lets them optimize their fleet maintenance and use service crews most effectively. This saves the companies money by letting them lower overall asset downtime and allows customers to be more satisfied with their product and services.
In this article, we will explain some of the beneficial applications of predictive maintenance, and then show how to build a predictive maintenance solution that will detect abnormal vibrations using Edge Impulse’s platform, the BrainChip Akida hardware, and a computer cooling fan.
Business Case Examples for Edge Predictive Maintenance
Predictive maintenance provides a wide variety of business benefits, such as:
Predicting asset depreciation and maintenance timelines
The security and building-access industry has been experiencing increasing pressure due to the global pandemic, and it's imperative for customers to understand when a security door or component might fail. By anticipating maintenance, companies can reduce unplanned out-of-service intervals, allowing for minimal disruption in office buildings with heavy traffic of people.
Lowering cost and gaining more ROI
Global shipping companies are looking for ways to lower their costs and increase efficiency. Focusing on predictive maintenance can allow them to proactively address any issues before they become costly or cause unsafe conditions, in order to avoid downtime on ships.
Data complexity: If you've got a factory or manufacturing floor with hundreds of cameras and sensors, there's just no way to send all that information across the Internet to the cloud for processing — it's going to overwhelm whatever kind of connection you have.
Latency: This is the time it takes for a system to respond after a key event happens. It's important in industrial and manufacturing settings because when there are sudden changes, such as a potential machine malfunction, cloud-based compute won't be able to make decisions or predictions quickly enough. Cloud processing is simply too slow; predictive models running on the edge are the way to go.
Cost: The economics of cloud computing keep getting better, but it still costs money. Edge computing reduces data consumption by sending less information to remote servers, which saves energy and bandwidth and keeps response times fast.
Reliability: The local processing of an asset-monitoring system means that it will be able to work even when connectivity goes down. Edge machine learning is great for both on- and off-grid industrial assets.
Privacy: With edge compute, sensitive live operational sensor data does not need to leave the facility or be shared with third parties.
Let’s look at how to assemble a solution that detects anomalous hardware vibrations.
Akida Development Kit (Raspberry Pi based), keyboard, mouse, monitor
Standoffs and screws — a #2-52 screw/nut was used to secure the accelerometer to the fan
First, connect the accelerometer to the Raspberry Pi header like so:
Please follow these instructions for setup and creation of your Edge Impulse account. Once you have an empty project created, you can set up your Akida™ Development Kit and collect your accelerometer data. You will design the Impulse later in this guide.
To start setting up the device for a custom model deployment, let's verify you have installed all the packages you need. Ensure the development kit is powered on and connected to the network. Set up Visual Studio Code for remote debugging and open a terminal in VSCode once connected. Run these commands to install the needed components.
You will also need Node.js v14.x to be able to use the Edge Impulse CLI. Install it by running these commands:
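On a Debian-based OS, one common route (an assumption here) is the NodeSource setup script:

```
curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt install -y nodejs
node -v
```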
The last command should return the node version, v14 or above.
Finally, let's install the Linux Python SDK; you just need to run these commands:
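edge_impulse_linux is the SDK's actual PyPI package name; the apt packages below are the usual Raspberry Pi prerequisites (an assumption for this particular kit):

```
sudo apt install -y libatlas-base-dev portaudio19-dev
pip3 install edge_impulse_linux
```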
After getting the Akida Development Kit configured and the accelerometer connected, you will need to collect data from the accelerometer/fan setup. Since you are using a custom device, we have developed code that you can use immediately.
You can download it with git using:
Inside the directory you will find accel-hw-timed-fixed-dt.py. This file has the needed components to collect accelerometer data. Here is a flow chart of how it runs:
To run it, use this command:
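Assuming the script takes the output folder as an argument (check the repository's README for the exact usage):

```
python3 accel-hw-timed-fixed-dt.py data/
```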
This will start collecting data in the folder specified. For the project to have good data, we recommend taking at least 300 samples for each of the following conditions:
Fan off — label as “off”
Fan on — label as “on”
Fan on with finger lightly rubbing the spinning center hub of the motor — label this as “center”. This is to simulate one possible fault condition.
Fan on with a finger lightly touching the spinning blade at the outermost edge — label this as "edge". This simulates another possible fault condition.
To upload the data to Edge Impulse, use the uploader tool installed with the Edge Impulse CLI.
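A typical invocation looks like this; the label and file paths are examples, while --label and --category are standard uploader flags:

```
edge-impulse-uploader --category training --label on data/on/*.csv
```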
You may be prompted for your Edge Impulse username and password. After a successful connection you should select the empty project you created earlier.
The Ubuntu operating system running on the Raspberry Pi is not an RTOS, so it is impossible to get consistently spaced accelerometer samples. That is, the data acquisition is not hardware-timed: the OS has to be interrupted to service the sampling, and that servicing can be delayed by other tasks, so there is no guarantee that samples are acquired at a fixed delta-t.
In order to get good performance, the code implements a hardware interrupt with the PWM and GPIO pins. Our testing showed that the maximum delay in servicing the interrupt dropped from approximately 8 ms to 2 ms. The code makes the assumption that the delta-t is fixed at the sample frequency, and no variance is recorded.
Lastly, it is important to get the data out of the accelerometer as fast as possible so that it is ready for the next sample. In the code you ensure that the fastest speed is enabled. The code is set up to take 100 samples of data over 1 second and transfer them off the device as quickly as possible.
Since you have not implemented custom C++ processing code for this custom processing block, you are not able to deploy an EIM-compiled model from Edge Impulse Studio. That is, if you tried to build an EIM for "Linux (AARCH64 with AKD1000 MINI PCIe)" there would be a build error because of the lack of C++ code to run the DSP on the CPU.
You can work around this issue by creating two projects: one that does the custom feature generation (the custom DSP block), from which you export the FBZ file for Akida, and a second project that is used only for anomaly detection. From this second project you will output the EIM for the anomaly detection to run on the CPU.
The Python code that runs on the Enablement device will tie all the pieces together.
Once you have collected the data it is time to design the rest of the Edge Impulse Studio project. This is what your Impulse design should eventually look like.
Since you have taken accelerometer data at 100 Hz for 1-second record lengths, it is important to use those values in the Time series data block.
The Akida dense network uses 4-bit uint8 inputs. This means that the range of input data allowed must be between 0 and 15. The default classification blocks in Edge Impulse output signed float data. Therefore you must use custom spectral features code that makes the training and test datasets correct for the 4-bit, uint8 dense-layer classifier. The code for the custom processing block is found here. You will need to follow the instructions for using Custom Processing Blocks to add it to your Edge Impulse Studio project.
Select the Classification - BrainChip Akida™ as the learn block and ensure that the Spectral Features input box is checked. Save the Impulse and proceed onto feature generation.
This process of generating features and determining the most important features of your data will further reduce the amount of signal analysis needed on the device with new and unseen data. An obstruction in the fan will create a much different waveform on all three accelerometer axes than a nominal sample; you can use the most important features to more quickly and accurately determine if a new incoming signal is an obstruction or a fan failure, etc.
When using the Akida blocks it is important to review the accuracy of the model: Akida heavily quantizes the model, and accuracy can suffer without proper training (especially quantization-aware training). The Akida learning blocks have this training code implemented, and the defaults can work really well with this type of data.
To view the model accuracy and Akida specific metrics be sure to select the “Quantized (akida)” as the model version.
When this option is selected you will see the Confusion Matrix for the Validation Dataset and Akida Performance parameters.
From the first project (the classifier project) go into the Dashboard, select Export, and follow the steps to download the data. Once downloaded, go back to the second project (the anomaly project) and use the Data acquisition tab to upload the recently downloaded data.
With the data uploaded you will need to create a new Impulse as shown below.
Since you have taken accelerometer data at 100 Hz for 1-second record lengths, it is important to use those values in the Time series data block.
The k-means algorithm does not have the restriction of 4-bit, unsigned data, so it does not require a custom block. Please select the default Spectral Analysis block.
Anomaly detection can be used to detect irregular patterns in the collected sensor data. In Edge Impulse you can implement anomaly detection using one of the available anomaly detection blocks. For this setup you will be using k-means as it is freely available to all Edge Impulse developers.
In the anomaly detection block, make sure to click "Select suggested axes" to highlight the features of importance. Without selecting this button, the anomaly detection settings will default to your data's root-mean-square (RMS) value for each of the axes. Prior to the release of the feature importance view in the DSP block, the anomaly detection block would prioritize the RMS values, and you would then have to decide by yourself whether the RMS values were most meaningful for your anomaly detection use case. With feature importance, you take the guesswork out of this and get your model to production even faster!
You are using custom code for this project, and you will need the Akida-compatible model file stored in FBZ format. Proceed to the Dashboard of the first project (the classifier project) and select the Classifier model - MetaTF file. Once the file is presented, download it to your machine and then drag and drop it into the brainchip_acceleromenter folder in the open Visual Studio Code file viewer.
The anomaly scoring algorithm can be neatly packaged into an Edge Impulse .eim file. To do so, go to the Deployment tab of the second Edge Impulse project (the one with the k-means anomaly scoring), select Linux (AARCH64) from the drop-down menu, and click Build. Once the file is presented, download it to your machine and then drag and drop it into the brainchip_acceleromenter folder in the open Visual Studio Code file viewer.
Once all the files are in the correct directory, you can run the inference demo with:
Below is a flow chart of how the code works.
And the results of the inference will be displayed below. For example, here is the output when the center hub is rubbing:
In this example we have shown how you can easily implement new solutions for BrainChip's Akida enablement devices to test your predictive maintenance projects. Demonstrated are the abilities to use custom sensors with Akida and Edge Impulse, adjust Edge Impulse DSP blocks, train an Akida-compatible model, and then download the trained model to run on your Akida device. The model runs fast, detects anomalies, and can be further customized with new data, which is easily uploaded back into Edge Impulse Studio for continuous improvement of the model's abilities (new classes, higher accuracy, etc.).
To learn further about BrainChip devices please visit brainchip.com or reach out to Edge Impulse at edgeimpulse.com/contact.
Measure the flow rate through a pipe and use machine learning to determine if a clog has formed.
Created By: Shebin Jose Jacob
Public Project Link: https://studio.edgeimpulse.com/public/171104/latest
GitHub Repository:
https://github.com/CodersCafeTech/Clog-Detection-With-AI
Pipeline clogs can have serious and destructive effects on industrial operations. Clogs can occur for a variety of reasons, such as the build-up of debris, corrosion, and other types of damage. When a pipeline clogs, it can disrupt the flow of materials and lead to costly repairs, downtime, and other problems. In this article, we will explore the destructive effects of clogs in industrial pipelines and discuss some ways to prevent and mitigate these issues.
One of the primary effects of pipeline clogs is reduced efficiency and productivity. When a pipeline is clogged, the flow of materials is disrupted, which can lead to delays and bottlenecks in the production process. This can result in missed deadlines, reduced output, and decreased profits. Additionally, clogs can cause equipment to wear out more quickly, which can result in higher maintenance and repair costs.
Another destructive effect of pipeline clogs is environmental damage. When a pipeline clogs, it can lead to spills and leaks, which can have serious consequences for the environment. For example, if a pipeline carrying hazardous materials clogs, the materials may leak out and contaminate the surrounding area. This can have serious impacts on wildlife, ecosystems, and human health.
In addition to these effects, pipeline clogs can also pose a safety risk to workers. If a clog occurs in a pipeline carrying high-pressure fluids or gases, it can lead to explosions or other hazards. This can put workers at risk of injury or death, as well as cause damage to equipment and facilities.
As a proposed solution to the problem of pipeline clogs in industrial operations, we are introducing the use of artificial intelligence (AI) and machine learning. Our AI system uses flow rate sensor data to detect clogs in pipelines by analyzing changes in flow rates that may indicate a blockage. This approach has the potential to prevent disruptions and costly repairs, as well as reduce the risk of environmental damage and safety incidents.
To implement this solution, flow rate sensors would be installed along the length of the pipeline. These sensors would continuously measure the flow rate of materials through the pipeline and transmit the data back to the AI system. The AI system would then use machine learning algorithms to analyze the data and detect any changes that may indicate a clog. If a clog is detected, the system could alert maintenance personnel, who can then take action to address the problem.
In this project, we utilized the Seeed Wio Terminal development board. This particular board was chosen for its comprehensive capabilities as a complete system, including a screen, a development board, an input/output interface, and an enclosure. Essentially, the Seeed Wio Terminal provides everything needed for a successful project in a single, integrated package. Additionally, this development board is known for its reliability and ease of use, making it an ideal choice for our project.
We used a DFRobot Water Flow sensor to detect the flow state.
The purpose of this sensor is to measure the flow rate of a liquid as it passes through it. It accomplishes this task through the use of a magnetic rotor and a hall effect sensor. When liquid flows through the sensor, the movement of the liquid causes the magnetic rotor to rotate. The speed at which the rotor turns is directly proportional to the flow rate of the liquid. The hall effect sensor, which is positioned near the rotor, detects this rotation and outputs a pulse width signal. This pulse width signal can then be used to calculate the flow rate of the liquid. In this way, the combination of the magnetic rotor and the hall effect sensor allows for precise measurement of the flow rate of a liquid.
The flow setup for this system is pretty simple. Essentially, it involves attaching two pipes to the inlet and outlet of the flow sensor. The inlet pipe is used to channel the liquid being measured into the flow sensor, while the outlet pipe serves to direct the liquid out of the flow sensor after it has been measured. This simple configuration allows for the accurate measurement of the flow rate of the liquid as it passes through the flow sensor.
We will collect the data:
When there is no flow
When there is a flow
When there is a clog
To prepare your Seeed Wio Terminal for use with Edge Impulse, you can follow the instructions provided in the official guide. However, we have chosen to employ an alternative method for collecting data in our project. Specifically, we are using CSV files to gather data, which we then upload to Edge Impulse. From there, we follow the usual process of generating a TinyML model using the data collected in this manner. Our method allows us to collect data in a flexible, portable format that can be easily transferred to Edge Impulse for further analysis and model creation.
For our project, we are utilizing a water flow sensor that produces a pulse width modulation (PWM) signal as its output. Rather than collecting the analog values directly from the sensor, we have chosen to calculate the flow rate using an equation based on the PWM signal. This flow rate data is then collected as time series data, allowing us to track changes in flow rate over time. We have also collected flow rate data for three different scenarios: no flow, normal flow, and a clog. Through our analysis, we have determined that these three scenarios produce distinguishable patterns in the flow rate data that can be detected by our model.
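As a sketch of how the flow rate can be derived from the sensor's pulse output (the pin number and the 7.5 pulses-per-L/min conversion factor are assumptions; check the datasheet for your exact sensor):

```cpp
const int FLOW_PIN = 0;                 // Wio Terminal pin wired to the sensor's signal line
volatile unsigned long pulseCount = 0;

void onPulse() { pulseCount++; }        // fires on every rising edge from the hall sensor

void setup() {
  Serial.begin(115200);
  pinMode(FLOW_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(FLOW_PIN), onPulse, RISING);
}

void loop() {
  pulseCount = 0;
  delay(1000);                          // count pulses for one second
  // Many hall-effect flow sensors follow F (Hz) = K * Q (L/min); K = 7.5 is
  // a common value -- confirm against your sensor's datasheet.
  float flowRate = pulseCount / 7.5;    // litres per minute
  Serial.println(flowRate);
}
```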
To collect data for your project, follow these steps:
Upload DataCollection.ino to Wio Terminal.
Plug your Wio Terminal into the computer.
Run SerialDataCollection.py on the computer.
Press button C to start recording.
When you have enough data, press button C again to stop recording.
Once you have stopped recording, it will generate a CSV file on your computer. Name it according to the flow state.
Upload the CSV file to Edge Impulse using the Data Acquisition tab.
After uploading the CSV files containing our flow rate data to Edge Impulse, we divided the entire dataset into smaller samples of 6 seconds in length. This process, known as splitting the data, allows us to analyze and manipulate the data for model creation.
By breaking the data down into smaller chunks, we can more easily identify trends and patterns that may be relevant to our model. Additionally, this approach allows us to efficiently use the data for training and testing, as we can more easily control the input and output for each sample. One of the collected samples for each class is visualized below.
After dividing our flow rate data into smaller samples as described above, we have further split the dataset into two distinct subsets: a training dataset and a testing dataset. This process is known as data partitioning, and it is an essential step in the model creation process.
By separating the data into these two subsets, we can use the training dataset to teach our model to recognize patterns and make predictions, while the testing dataset is used to evaluate the accuracy and effectiveness of the model. By using a clean, well-organized dataset, we can be confident that our model is learning from high-quality data and is more likely to produce accurate and reliable results.
An impulse is a specialized machine learning pipeline designed to extract useful information from raw data and use it to make predictions or classify new data. The process of creating an impulse typically involves three main stages: signal processing, feature extraction, and learning.
During the signal processing stage, the raw data is cleaned and organized in a format that is more suitable for analysis. This may involve removing noise or other extraneous information, and may also involve preprocessing the data in some way to make it more useful for the next stage of the process.
Next, the feature extraction stage involves identifying and extracting important characteristics or patterns from the processed data. These features are the key pieces of information that the learning block will use to classify or predict new data.
Finally, the learning block is responsible for categorizing or predicting new data based on the features extracted in the previous stage. This may involve training a machine learning model on the extracted features, or it may involve applying some other type of classification or prediction algorithm.
In this project, we are utilizing machine learning to classify the flow rate of a liquid into one of three distinct classes. To do this, we are using Time Series Data as the input block for the impulse. This type of data consists of a series of measurements taken at regular intervals over a period of time, and it is well-suited for analyzing trends and patterns in flow rate data.
For the processing block, we are using Raw Data, which is the unprocessed data collected directly from the flow sensor. This data is then passed through the processing block, where it is cleaned and organized in a way that is more suitable for analysis.
Finally, for the learning block, we are using a Classifier block. This type of algorithm is designed to assign data to one of several predefined categories, and it is well-suited for the task of classifying flow rate data into one of three categories. By using classification as the learning block, we can categorize the flow rate data into one of three classes: no flow, normal flow, or a clog.
At this point in the process, we are ready to move to the Raw Data tab and begin generating features. The Raw Data tab provides a number of options for manipulating the data, such as changing the scale of the axis or applying various filters. In our case, we have chosen to keep the default settings and proceed directly to generating features.
To generate features, we will apply a variety of algorithms and techniques to identify important patterns and characteristics in the data. These features will be used by the learning block of our impulse to classify the flow rate data into one of three categories. By carefully selecting and extracting relevant features, we can create a more accurate and reliable model for classifying flow rate data.
After analyzing the features, we have determined that they are well separated and there is no overlap between the classes. This is an encouraging sign, as it suggests that we have a high-quality dataset that is well-suited for model generation.
Now that we have extracted and prepared our features, we are ready to move on to the Classifier tab to train our model. The Classifier tab provides several options for modifying the behavior of our model, including the number of neurons in the hidden layer, the learning rate, and the number of epochs.
Through a process of trial and error, we experimented with different combinations of parameters until we were able to achieve a training accuracy that met our standards. This process involved adjusting the number of neurons in the hidden layer, the learning rate, and the number of epochs, among other things. Ultimately, we were able to find a set of parameters that resulted in a model with desired training accuracy which is shown in the figure.
After training the model for a total of 70 cycles with a learning rate of 0.002, we were able to produce an output model with 100% training accuracy and a loss of 0.03.
This level of accuracy is extremely high, indicating that our model is capable of accurately classifying flow rate data into one of three categories. Additionally, the low loss value suggests that our model is able to make predictions with a high degree of confidence, further increasing the reliability of our results.
Having trained and fine-tuned our model to achieve a high level of accuracy, we are now ready to test its performance on some previously unseen data. To do this, we will navigate to the Model Testing tab and use the Classify All feature to evaluate the model's performance.
By applying the model to a new set of data, we can determine whether it is capable of accurately predicting flow rate patterns and classifying the data into one of three categories. If the model performs well on this test data, we can be confident that it will be able to provide useful and reliable insights when applied to real-world situations.
Upon running the test, we were pleased to see that the model performed exceptionally well, accurately classifying the flow rate data into one of three categories with a high degree of accuracy. These results are a strong indication that our model is well-functioning and capable of providing valuable insights for industrial pipeline management.
Now that we have created and tested a well-functioning model for predicting and classifying flow rate patterns in industrial pipelines, we are ready to deploy it as an Arduino Library.
To do this, we will navigate to the Deployment tab and follow the instructions provided there to build an Arduino Library for our model.
During the process of building the library, we have the option of enabling optimizations with the EON Compiler. This feature allows us to further improve the performance of our model by optimizing the code for efficient execution on the device. While this is an optional step, it can be useful for increasing the speed and efficiency of our model, particularly if we plan to use it in resource-constrained environments.
After completing the process of building an Arduino library for our model, we will be presented with a .zip file containing the model itself, as well as a number of examples demonstrating how to use the model in various contexts. To add the library to the Arduino Integrated Development Environment (IDE), we can simply navigate to Sketch > Include Library > Add .ZIP Library in the IDE, and select the .zip file produced by the build process. This will install the library and make it available for use in our Arduino projects.
We need to modify the static_buffer.ino file located in the Arduino Integrated Development Environment (IDE) for the purpose of enabling dynamic inferencing. We can begin by opening the Arduino IDE and navigating to File > Examples > Your Project Name > static_buffer > static_buffer.ino. This will open the static_buffer.ino file in the editor window, allowing us to make changes to the code as needed.
Dynamic inferencing involves making predictions or classifications in real-time as new data is received, so we will need to modify the code to allow for real-time data processing and prediction. This may involve adding code to handle incoming data streams, applying machine learning algorithms to the data, and making predictions or classifications based on the results. We may also need to make other modifications to the code to support dynamic inferencing, depending on the specific requirements of our application. Once we have made the necessary changes to the code, we can save the modified file and use it to perform inferencing with our model, providing valuable insights for industrial pipeline management. The code for our project is available in the below GitHub repository.
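As a minimal sketch of that change (the header name depends on your Edge Impulse project export, and readFlowRate() is a hypothetical helper, for example the pulse-counting code shown earlier):

```cpp
#include <your_project_inferencing.h>  // generated by the Arduino library export

float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull slices of the feature buffer
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void loop() {
  // Instead of the hard-coded buffer copied from Studio, fill `features`
  // with live flow-rate readings before every classification
  for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i++) {
    features[i] = readFlowRate();
  }

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &raw_feature_get_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
      ei_printf("%s: %.3f\n", result.classification[ix].label,
                result.classification[ix].value);
    }
  }
}
```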
All of the assets for this project, including the code, documentation, and any other relevant files, are available in this GitHub repository.
This project uses a flowmeter to measure the rate of flow of a liquid through a pipe, then predicts if a clog is detected using a machine learning algorithm that has been deployed on a Seeed Wio Terminal. Followup work could include the development of an application or dashboard to render the time-series data from the flowmeter, highlight possible clogs or reduced flow readings, or integrate into a larger pipeline management system.
Use machine learning classification to monitor the operation of a 3D printer and look for anomalies in movement, with the Reloc / Edge Impulse BrickML device.
Created By: Attila Tokes
Public Project Link: https://studio.edgeimpulse.com/public/283049/latest
BrickML is a plug-and-play device from Edge Impulse and reloc, meant to be a reference design for Edge ML industrial applications. It is designed to monitor machine health, by collecting and analyzing sensor data locally using ML models built with Edge Impulse.
In terms of specifications, BrickML comes with a powerful Cortex-M33 microprocessor, 512KB RAM and various storage options for code and data. It has CAN, LTE, UART, I2C and SPI interfaces, and supports wired and wireless connectivity over USB, Ethernet and Bluetooth 5.1. A wide selection of onboard sensors can readily be used in projects: a 9-axis inertial sensor (Bosch BNO055), a humidity and temperature sensor (Renesas HS3001), a digital microphone (Knowles SPH0641LU4H-1) and ADC inputs for current sensing.
BrickML comes with seamless integration with Edge Impulse Studio. The device can be used both for data collection, experimentation and running live ML models.
BrickML is designed to be ready to use out-of-the-box. All we need to do is connect the device to a laptop / PC using the provided USB Type-C cable.
On the Laptop / PC we can use the Edge Impulse CLI tool set to interact with the BrickML device. To install it follow the Installation guide from the documentation.
Once the Edge Impulse CLI is installed, we connect to the BrickML by plugging it into a USB port and running the edge-impulse-daemon command:
If we are not already logged in, edge-impulse-daemon will ask for our Edge Impulse Studio email and password. After this the BrickML should be automatically detected, and we will be asked to choose the Studio project we want to use.
Once connected, the BrickML will show up in the Devices section of our Edge Impulse Studio project, and it should be ready to be used for data collection and model training.
For the purpose of this tutorial, I chose to mount the BrickML on a 3D printer. The idea is to use the BrickML for anomaly detection. For this, first we will teach the device how the 3D printer normally operates, after which we will build an anomaly detection model that can detect irregularities in the functioning of the 3D printer.
Installing the BrickML on the 3D printer was fairly easy. The BrickML comes in a case with four mounting holes that can be used to attach the device to various equipment. In the case of the 3D printer, I mounted the BrickML to the frame using some M4 bolts and T-nuts.
After the BrickML is mounted, we can go ahead and create a project from our Edge Impulse projects page:
As some of the (optional) features we will use require an Enterprise account, I selected the Enterprise project type.
Note: the steps I will follow in this guide are generic, so it should be easy to apply them on similar projects.
The first step of an AI / ML project is the data collection. In Edge Impulse Studio we do this from the Data acquisition tab.
For this tutorial, I decided to collect inertial sensor data for 3 labels, in large chunks of roughly 5 minutes each:
printing - 7 samples, 35 minutes of data
idle - 2 samples, 10 minutes of data
off - 1 sample, 5 minutes of data
For the printing class, I used a slightly modified G-code file from a previous 3D print and re-played it on the printer. The idle and off labels serve as a baseline, so we can detect when the 3D printer is doing nothing.
The collected samples were split into smaller chunks, and then arranged into Training and Test sets with close to 80/20 proportion:
Now that we have some data, we can continue with the Impulse design step. The Impulse represents our machine learning pipeline, which includes data collection, pre-processing and learning stages.
For this tutorial I went with the following blocks:
Time Series Data input with 3-axis accelerometer and gyroscope sensor data, 100 Hz frequency, 4 sec window size + 1 sec increase
a Spectral Analysis processing block to extract the frequency, power and other characteristics from the inertial sensor data
a Classification learning block that classifies the 3 normal operating states
an Anomaly Detection learning block capable of detecting states different from normal operation
Output Features consisting of confidence scores for the 3 classes, plus an anomaly score that indicates unusual behavior
The Spectral Analysis processing block is used to extract frequency, power and other characteristics from the sensor data. It is ideal for detecting motion patterns in inertial sensor signals. In this project we are using it to process the accelerometer and gyroscope data.
After saving the parameters, we can head over to the Generate features tab and launch spectral feature generation by hitting the "Generate features" button. When the feature generation job completes, a visual representation of the generated features is shown in the Feature explorer section:
As we can see, the features for the printing, idle and off classes are well separated.
After the feature generation the next step is to generate a Classifier. Here we will train a Neural Network using the default settings, which consists of an Input layer, two Dense layers, and an Output layer:
The training can be started by using the "Start training" button. After a couple of minutes we are presented with the results:
As we can see, we obtained an accuracy of 99.8%, with the printing, idle and off states well separated. A small number of idle and off samples overlap, but this is expected, as the two categories are quite similar.
Anomaly detection can be used to detect irregular patterns in the collected sensor data. In Edge Impulse we can implement it using one of the two available anomaly detection blocks. For this project, I decided to go with the Anomaly Detection (GMM) learning block.
In terms of parameters, we need to select the spectral features we want to use for anomaly detection. After a couple of tries, I went with 10 components, with the RMS and Skewness values from the accelerometer and gyroscope selected as features.
Note: by default, the suggested selection includes spectral power features for specific frequency bins. I decided not to use these, as it is not guaranteed that real-world anomalies will show up at those particular frequencies.
After setting the parameters, the anomaly detection is trained in the usual way, by clicking the "Start training" button.
In the output we should see that the samples for known classes are in well separated regions. This means the model will be able to easily detect irregularities in the input.
Once the training of our model is done, the next step is to test the model. Here, we can evaluate the model against our Test dataset, and we can also test it live on the BrickML device.
We can see that we got a very good accuracy of 99+%, with a small number of uncertainties between the idle and off states.
As the model works as expected, we should try Live classification on newly sampled data from the BrickML device. For this, first we need to connect to the BrickML device, either using edge-impulse-daemon or Web USB. After this, we can start collecting some sensor data by hitting the "Start sampling" button with the appropriate parameters:
I tested the model in various conditions. The below screenshot shows the results when running a print:
During live testing we can also check out the Anomaly Detection feature. For this I gave the printer a little shake. The result of this is that the Anomaly score skyrockets, indicating that some irregularity was detected:
The final stage of the project is to build and deploy our Impulse to the BrickML device.
To build the image we can go to the Deployment tab. There, we need to select the BrickML / Renesas RA6M5 (Cortex-M33 200MHz) as the target, and click the Build button:
Optionally, we can enable the EON™ Compiler, which tunes the built model to the selected target device.
The build will complete in a couple of minutes, and the output will show up in the Build output section, ready to download.
The output is a .zip archive containing two files: a signed binary firmware image and an uploader script.
The new firmware can be uploaded to the BrickML using the provided ei_uploader.py script, by running the following command:
After a quick reboot / power cycle, we should be able to launch the model using the edge-impulse-run-impulse command.
Here is a quick video showing the BrickML in action while running the model:
As our example shows, the BrickML is a very capable device that can be used to implement Edge ML solutions with very little development effort.
Using BrickML and Edge Impulse Studio we can easily collect sensor data and train an ML model. The resulting model can be rapidly deployed to the BrickML device, which then runs inference in real time.
To integrate BrickML into an existing solution, we can use the AT interface it exposes, or we can choose to extend its firmware with custom functionality.
BrickML Product Page, https://edgeimpulse.com/reference-designs/brickml
Edge Impulse Documentation, https://docs.edgeimpulse.com/docs/
Using a Nordic Thingy:53 to monitor vibrations and perform predictive maintenance on an industrial compressor.
Created By: Zalmotek
Public Project Link:
https://studio.edgeimpulse.com/studio/135470
GitHub Repository:
https://github.com/Zalmotek/edge-impulse-predictive-maintenance-vibration-thingy-53-nordic
Predictive maintenance uses data and analytics to predict when equipment may fail, saving money by spotting possible issues before they become costly problems that require repairs. Monitoring a device's temperature, sound level, and vibration level are three methods for anticipating machinery failure, each appropriate for a certain type of machinery.
Vibration data is a great dimension to monitor in machinery that has moving parts. When such machinery starts manifesting anomalous vibration patterns, a possible malfunction may have occurred and critical equipment failure may be inbound. Such modifications may take place over a span of hours or days and they are seldom picked up by human operators. By harnessing IoT devices and machine learning algorithms, such phenomena can be detected and maintenance teams may be alerted before machinery failure takes place.
In industrial settings, compressors are used to provide air to power air tools, paint sprayers, and abrasive blast equipment, to phase shift refrigerants for air conditioning and refrigeration and to propel gas through pipelines.
Fundamentally, an air compressor is a pump that pulls air from the atmosphere and pressurizes it into a reducing volume. The two most common types of compressors are piston compressors, in which a piston moves up and down in a cylinder, drawing air on the downstroke, and rotary screw compressors that employ a set of helical screws to draw air from the atmosphere.
In our application, the compressor is used in a laser cutting machine to clear away debris and cool the material at the point of contact between the workpiece and the laser beam. Failing to do this may ruin the workpiece, as the material will warp near the laser beam, and it also poses a structural risk to the whole machine, as the debris might accumulate and ignite from the heat.
There are not many ways of preventing such accidents, other than performing routine preventive maintenance on the compressor unit, such as changing the oil, the gaskets and the tubing.
Because the whole principle of operation of a compressor is based on moving parts, any eccentricity or imbalance will produce a vibration pattern different from the normal functioning regime.
To address this, we will be developing a predictive maintenance solution that gathers vibration data from an oil-less compressor and uses machine learning algorithms to detect if the piston is unbalanced or if the compressor manifests an anomalous behavior.
USB-C cable
nRF Programmer Android/iOS App
Edge Impulse account
Git
For this application, we will be using the Thingy:53, a prototyping platform developed by Nordic Semiconductor based on the nRF5340 SoC, packed with temperature, humidity, air quality and color sensors, alongside a high-precision accelerometer and a MEMS microphone.
The application processor is performance-optimized and can run at either 128 or 64 MHz thanks to voltage-frequency scaling. It has a DSP instruction capability, a floating-point unit (FPU), an 8 KB 2-way associative cache, 1 MB Flash, and 512 KB RAM. The network processor operates at 64 MHz and is built with low power and efficiency in mind (101 CoreMark/mA). It has 256 KB Flash and 64 KB RAM. This makes it a great pick for developing edge ML applications. Moreover, if the use case requires it, a communication layer can be added over the detection algorithm, the Thingy:53 being capable of Bluetooth LE, Bluetooth mesh, Thread, Zigbee and Matter.
Having everything on one single prototyping platform, there is no wiring needed. Just attach the board to the device you wish to monitor and connect it to a computer using a USB cable.
Let's start by developing an Edge Impulse project.
Log into your Edge Impulse account, pick Create new project from the menu, give it a recognizable name, choose Developer as the project type, and then click Create new project.
Afterward, select Accelerometer data as the type of data you will be dealing with.
The Nordic nRF Edge Impulse iPhone and Android apps will work with new Thingy:53 devices right out of the box.
The firmware of the Thingy:53 needs to be updated before it can be connected to the Edge Impulse project. Launch the nRF Programmer mobile application after downloading it from the App Store or Google Play. You will be presented with several available firmware images that can be uploaded to the board.
Select the Edge Impulse application and tap Download. Afterward, hit Install. A list with all the nearby devices will show up and you must select the Thingy:53 board that you wish to program.
With the firmware updated, connect the Thingy:53 board to a computer that has the edge-impulse-cli suite installed, turn it on, launch a terminal and issue the following command:
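```
edge-impulse-daemon
```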
Fill in your username and password when asked to.
Afterward, select the project you wish to attach the board to and hit Enter.
If everything is successful, the Thingy:53 will show up in the Devices tab with a green dot, signaling that it is online and ready to gather data.
After the board shows up in the Devices tab, navigate to Data acquisition to start gathering data. Your device will show up in the Record new data window. Write down a label that corresponds to the phenomenon you are capturing, use 10000ms as the Sample length, Accelerometer as the Sensor, and a Frequency of 100Hz. With everything set up, tap Start sampling.
For this application, the Nordic Thingy:53 will be mounted directly on the compressor unit by using a strong adhesive and we will be recording data for 2 classes: “Unbalance_In_Rotating_Parts” and “Normal_Operation”. The “Normal_Operation” class is very important because neural networks can only "understand" the training data that was used to create them, and any new data that will be presented to them will have to end up in one of the defined categories.
For this model, aim for around 4 minutes of data for each class. Every time you record a new data entry, it will show up in the Collected data field, and the time-domain representation of the signal will be displayed in the Raw Data window.
During normal operation, the compressor manifests a vibration with a low amplitude, with rhythmic increases in amplitude once every cycle, followed by a slow reduction back to normal levels.
When the piston is damaged or unbalanced, notice how multiple rhythmic spikes appear in the signal.
After you have gathered at least 2 minutes of data for every class, the data must be split between two categories: a Training dataset and a Testing dataset. An adequate split ratio is 80% Training data to 20% Testing data.
After the datapool is populated, it’s time to create the Impulse. An Impulse is an abstraction of the process of gathering data, processing it, feeding it into a neural network and outputting it, each step of the process being customizable.
For this application, we will be using an input block with a 2000ms window size, with a window increase of 200ms at an acquisition frequency of 100Hz, a Spectral Analysis block as our processing block and a combination of learning blocks, a Classification(Keras) and an Anomaly Detection block.
The Anomaly Detection block is necessary for this application because not all failures that may occur during normal operation of the compressor can be simulated.
The Spectral Analysis block is used to extract the frequency and power characteristics of a signal. Low-pass and high-pass filters can be used in this block to eliminate undesirable frequencies. As with our use case, this block typically performs well when decoding recurrent patterns in a signal, such as those caused by the vibrations or motions picked up by an accelerometer unit.
For the moment, the default parameter values set by Edge Impulse offer great results. Leave everything as is and click Save parameters. To figure out whether the settings are good for your dataset, explore the data pool and check that similar data yields similar results.
After you are redirected to the feature generation tab, check “Calculate feature importance” and then press Generate features. The ability to calculate the importance of each individual signal feature is a great asset of the Edge Impulse platform, as it allows the Anomaly Detection block to prioritize those values as they are the most meaningful for the observed phenomenon.
The Feature explorer allows you to quickly check if the data separates nicely, as it is a visual representation of all the data from the Training dataset. Any point in the feature explorer can be hovered over to reveal the source for that point. If you work with time series data, clicking on a data item will show you the raw waveform, the signal window that was utilized, and a quick link to the signal processing page. This makes identifying the outlier data points in your dataset very simple.
The NN Classifier block's configuration is the next phase in the development of the machine learning algorithm. The number of training cycles, learning rate, size of the validation set, and whether or not the Auto-balance dataset function is enabled are just a few of the factors that can be modified. They provide users control over the number of epochs the NN is trained on, how quickly the weight of the links between neurons is modified each epoch, and the proportion of samples from the training dataset that are used for validation. The architecture of the NN is detailed underneath.
Leave everything on default settings for the time being and click Start training.
After the training has been assigned to a cluster, the training performance tab will be displayed. Here, you can view in tabulated form the correct and incorrect predictions made by the model after being presented with the Validation data set. When training a neural network, we aim for a high Accuracy (the percentage of predictions where the expected value matches the actual value of the data input) and a low Loss (the total sum of errors produced for all the samples in the validation data set).
Underneath those performance indices, you can visually explore the data to find the outliers and the mislabeled data. You can see that in the right side of the graphic there is a small cluster of “Unbalance_In_Rotating_Parts” data points that were mislabeled, represented with red dots.
A secondary neural network called the Anomaly Detector will be used to identify data that does not fall into any of the categories we established in the previous step.
By enabling the Generate Feature importance during the Generate Feature step, the users can greatly improve the performance of this Neural Network and drastically reduce the processing resources needed for using it.
Click on Select suggested axes. As you can see, spectral power features such as accY Spectral Power 3.12-9.38Hz and accZ Spectral Power 15.62-21.88Hz are the most meaningful characteristics in our dataset. Afterward, with the axes selected, press Start training.
You will be provided with the training results after the training is complete. You can see that the Anomaly Explorer defines zones around the acquired data and plots the two most significant features against one another. The same coordinate system is used to plot new data, and if it is not located near one of the predefined clusters, it is marked as an anomaly.
The Model Testing tab allows users to quickly evaluate how the machine learning model fares when presented with new data. The platform uses the data available in the Test data pool, defined during the data acquisition phase and evaluates the performance of the model.
Edge Impulse allows its users to export the machine learning model they have just created as a pre-compiled binary that can be easily uploaded to the board, without going through the effort of building custom firmware. To do so, click Build and wait for the process to end. Once it's done, download the .hex file and follow the steps in the video that shows up to upload it to the Thingy:53 board.
Connect the board to your computer when the impulse has been uploaded, open a Terminal, and type the following command to view the inferencing results:
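```
edge-impulse-run-impulse
```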
An alternative and easy way of quickly deploying the model on the edge is using the Nordic nRF Edge Impulse app for iPhone or Android:
Download and install the application from Google Play/Apple Store.
Launch the application and login with your Edge Impulse credentials.
Select your Predictive Maintenance project from the list:
Navigate to the Devices tab and connect to the Thingy:53:
Navigate to the Data tab and press Connect. You will see the status on the button changing from Connect to Disconnect.
Navigate to the deployment tab and press Deploy.
In the Inferencing tab, you will see the results of the Edge Impulse model you have flashed on the device:
The EON Compiler enables you to run neural networks with up to 35% less storage and 25-55% less RAM, without compromising model performance.
For all supported boards, this feature is automatically available at the bottom of your project's Deployment page.
Microcontroller targets usually have less than 128K of RAM and therefore have trouble running machine learning models on the edge, especially when other functionality is added, like a communication layer or special conditions. The EON Compiler allows users to overcome that impediment by greatly optimizing the resources needed to run those models on the edge.
By employing IoT devices powered by machine learning algorithms running on the edge, predictive maintenance is closer to becoming a common practice in industrial environments, making it cheaper, more accessible and more powerful than ever. While simple in their principle of operation, predictive maintenance systems improve the Overall Equipment Effectiveness and positively impact the equipment Remaining Useful Life (RUL).
If you need assistance in deploying your own solutions or more information about the tutorial above please reach out to us!
Sample data from a BLDC motor controller and apply machine learning to receive predictive maintenance alerts.
Created By: Avi Brown
Public Project Link: https://studio.edgeimpulse.com/public/102584/latest
Brushless DC (BLDC) motors' high performance and efficiency have made them one of the most popular options for industrial and robotics applications. Driving a BLDC motor requires a dedicated motor controller, and there are many controller manufacturers, such as ODrive and Roboteq, to name a couple.
In addition to offering precise motor control, these controllers often expose a number of performance properties to the engineer, including motor velocity, torque, power consumption, phase current, temperature, and more. This poses an excellent opportunity to use non-intrusive embedded machine learning to add an extra layer of sophistication to the system.
In this project we will learn how to:
Collect data from an ODrive motor controller (though any motor controller that allows querying power data can be used)
Import our data into the Edge Impulse studio using the Data forwarder
Discover how to create a K-means anomaly detection model that is small enough to run on a Raspberry Pi Pico
Finally we will see how to use the Arduino library generated by Edge Impulse and combine it with our own custom code. Let's go!
Most of the magic is going to happen in the Edge Impulse studio, so if you're following along be sure to open a free account! For those who don't know, Edge Impulse is an engineer-focused platform designed to streamline the process of building machine learning models for edge / embedded devices.
Next, for our motor setup we will be using an ODrive V3.6 (24V) controller with a brushless motor (D5065) from the same company. ODrive manufactures affordable yet robust motor control hardware that can be interfaced with via USB or UART serial connection.
Any motor controller that allows querying power data over serial connection can be used!
Finally, for our main computing unit we will be using a Raspberry Pi Pico. This tiny board packs an RP2040 (Arm Cortex-M0+) microcontroller. Be sure to check out Edge Impulse's official guide on this board.
...let's define our goal here. This project is meant to serve as a "get started" reference for leveraging Edge Impulse and TinyML to perform industrial motor predictive maintenance (PdM). A quick glance at the ODrive developer reference document shows that there are plenty of data elements from which we could create features and build our machine learning model, but regardless of what you choose for your specific use case, the steps should be similar.
For this tutorial we'll be using motor power as the driving parameter for our predictive maintenance model. Without using any additional sensors and relying on motor power data alone we can create an anomaly detection model that can alert when faults are detected.
In future tutorials we will explore the use of classification models to recognize specific faults such as bearing faults, axle misalignments, axle disjoints, etc. For now we will train our model using nominal, intact data, and use anomaly detection to detect behavior that falls outside of the norm.
What can motor power data tell us?
Electrical power is the product of current and voltage (P = V × I), so tracking a motor's power consumption allows us to consider the behavior of both current and voltage concurrently. A popular method used to monitor motor behavior is "Instantaneous Power Signature Analysis", or IPSA: essentially, a motor's power is analyzed in the frequency domain in order to uncover external interference, whether mechanical or electrical. You can read more about IPSA in this academic article: Bonaldi, E. L., Oliveira, L., Borges da Silva, J., Lambert-Torres, G., & Silva, L. E. (2012). Predictive Maintenance by Electrical Signature Analysis to Induction Motors. DOI: 10.5772/48045, p. 500.
In this tutorial we will be using spectral analysis to generate features for our anomaly detection model.
Data forwarder
One of Edge Impulse's most convenient tools is their data forwarder. This tool allows you to stream data directly to the Edge Impulse platform regardless of the data source. We will use an Arduino script to create a data stream that the data forwarder can listen to.
It's recommended to check out the official guide on Edge Impulse's data forwarder here, and check out the ODrive specific data forwarding Arduino script attached to this tutorial ("odrive_data_forwarding.ino")!
We'll run the attached code on our Arduino while it's connected via UART to our ODrive board. You can read about the ODrive UART interface here.
We don't need to use any external sensors (no accelerometers or microphones here!) - we can use the motor controller's built-in parameters and circuitry to gather powerful data.
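For reference, a minimal sketch of such a data forwarding loop is shown below. The exact ODrive property names used here (vbus_voltage and ibus) are assumptions that depend on your firmware version, so verify them against the ODrive developer reference; the attached odrive_data_forwarding.ino is the authoritative version.

```cpp
// Streams one motor power sample per line over USB serial, in the
// single-value-per-line format the Edge Impulse data forwarder expects.
// The ODrive is assumed to be connected to Serial1 (UART, ASCII protocol).

const unsigned long SAMPLE_INTERVAL_MS = 10;  // ~100 Hz stream

// Query one numeric property over the ODrive ASCII protocol
float odriveReadFloat(const char *property) {
  Serial1.print("r ");
  Serial1.println(property);
  return Serial1.parseFloat();
}

void setup() {
  Serial.begin(115200);   // to the PC (the data forwarder listens here)
  Serial1.begin(115200);  // to the ODrive UART pins
}

void loop() {
  static unsigned long lastSample = 0;
  if (millis() - lastSample < SAMPLE_INTERVAL_MS) return;
  lastSample = millis();

  float vbus = odriveReadFloat("vbus_voltage");  // bus voltage [V]
  float ibus = odriveReadFloat("ibus");          // bus current [A], firmware dependent
  Serial.println(vbus * ibus, 4);                // power [W], one sample per line
}
```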
Collecting data
We need to collect data to train our machine learning model. For this tutorial we will be recording only nominal performance, which means there is no need to perform fault simulation to record "faulty" data, a process that can be intrusive and dangerous. The anomaly detection model will tell us when the power signal from the motor is behaving in a way the model hasn't seen before.
There are plenty of ways to upload training data using Edge Impulse. For example, it's possible to upload .CSV files with the relevant data, or you can use the convenient Edge Impulse Data Forwarder to stream data live over serial, which is what we'll be doing in this tutorial.
Let's say we set a motor rotating with random changes in velocity in order to simulate the motion of a robotic arm or something of the like. Here's a video of what that might look like.
Let's take a look at 1 second of power consumption data from this motion:
Since the commanded motor velocity is constantly changing, so is the motor power signal.
Assuming you've gone through the Getting Started guide and have made an account with Edge Impulse, go ahead and make a new project.
Following the guide for using Edge Impulse's data forwarder we should see our virtual device appear when we click Data acquisition:
Let's set our motor in motion (as shown in the video above) and click "Start sampling" to begin sending data to the Edge Impulse platform. Feel free to leave this running for a long time -- the more data the better!
It's time to create an impulse, which is comprised of a series of configuration / processing blocks. First up is setting the type of data we're using, which in this case is "Time series data". This block will split our long samples into shorter windows. Next we decide what type of processing we want performed on our data. In this tutorial we'll be using "Spectral analysis", which looks at the behavior of the signals in the frequency domain. Finally we decide what sort of neural network we want to feed the results of the spectral analysis to. For this we will select "Anomaly detection (K-means)":
This impulse is the heart of our model. Once we click "Save Impulse" we can move to the "Spectral analysis" screen (select from menu on the left).
Spectral analysis
When we train our machine learning model we're not actually feeding raw, signal-level samples to the model; rather, we feed it features generated by digital signal processing. Using spectral analysis we can create sets of information about how our signal behaves in the frequency domain.
After we click "Save Impulse", let's navigate to the "Spectral analysis" window. Here we can make adjustments to the DSP block in our impulse. Among other things we can set filter types and immediately view the filtered data. The default filter setting is a low pass filter, but this can and should be adjusted according to the type of anomalies the engineer is trying to detect.
Once we're happy with our DSP block settings, we can click "Save parameters" and then navigate to the next screen - "Generate features".
Now we're ready to apply the signal processing to our data. In the "Generate features" screen, click "Generate features" and wait a few moments for Edge Impulse to create spectral features for each one of the samples:
We're ready to move on to the next block where we create our machine learning model. We're almost done! Once we've generated the DSP features we can navigate to the next screen "Anomaly detection" from the menu on the left.
On this screen we can set the number of clusters, as well as select the axes by which our data will be clustered. For this example all axes were selected, but if you know that certain axes are more or less important, it's best to select them accordingly. (This can be determined by using samples where the motor is experiencing faulty behavior together with the Calculate feature importance option in the Generate features section. More on this here.)
In the graph above, each sample is represented by a dot, and the surrounding circles represent the clusters. These clusters can be thought of as regions of typical behavior. Our model will notify us not only when a new sample falls outside of the clusters, but also by how much!
Our model is ready for deployment in the form of our choosing! For this tutorial we'll export our model as an Arduino library that we can invoke from within our custom Arduino scripts. Navigate to the "Deployment" screen, select "Arduino library", and hit "Build" on the bottom of the screen.
It's recommended to follow this guide to learn how to call your Edge Impulse model from within Arduino code. The example in the guide is for use with an accelerometer, but the principle is the same!
Using the Arduino IDE we will need to import the .ZIP folder into the Arduino library folder using Sketch -> Include library... -> Add .ZIP folder... Once we add the .ZIP folder from Edge Impulse we can import our inferencing library like this:
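```cpp
// The header name matches your Edge Impulse project name, e.g.:
#include <motor_anomaly_detection_inferencing.h>
```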
(You can peek inside the project .ZIP folder inside src to see the exact name of the header file!)
Combining the example code from the guide referenced above with custom code for gathering data from our ODrive (or whatever motor controller you happen to be using!), we'll end up with something like this. Please feel free to ask questions about the code if something is unclear.
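A sketch of that combination is shown below. It reuses the hypothetical header name and ODrive property names from above, collects one window of power samples, and prints the anomaly score computed by the K-means block; treat it as an outline under those assumptions rather than the exact project code.

```cpp
#include <motor_anomaly_detection_inferencing.h>  // hypothetical project header

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Same ODrive ASCII-protocol helper as in the data forwarding sketch
float odriveReadFloat(const char *property) {
  Serial1.print("r ");
  Serial1.println(property);
  return Serial1.parseFloat();
}

void setup() {
  Serial.begin(115200);
  Serial1.begin(115200);
}

void loop() {
  // 1. Fill one window with live power samples at the model's sample rate
  for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i++) {
    float vbus = odriveReadFloat("vbus_voltage");
    float ibus = odriveReadFloat("ibus");  // property path is firmware dependent
    features[i] = vbus * ibus;
    delay((uint32_t)EI_CLASSIFIER_INTERVAL_MS);
  }

  // 2. Wrap the buffer in a signal and run the impulse
  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // 3. With a K-means block, the anomaly score is the interesting output:
  //    the larger it is, the further the window sits from the learned clusters
  Serial.print("Anomaly score: ");
  Serial.println(result.anomaly, 3);
}
```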
Identify leaks by using machine learning and a flowmeter to measure and classify the flow of liquid through a pipe.
Created By: Shebin Jose Jacob
Public Project Link: https://studio.edgeimpulse.com/public/166615/latest
GitHub Repository:
https://github.com/CodersCafeTech/Fluid-Leakage-Detection-With-AI
Fluid leakage in industrial pipelines can have serious and potentially destructive effects on both the environment and human health.
One of the most significant impacts of fluid leakage is the potential for contamination of soil and water sources. Many industrial fluids, such as oil and chemical compounds, are toxic and can have devastating effects on the ecosystems they come into contact with. For example, an oil spill can contaminate soil and water, leading to the death of plants and animals that depend on these resources. The cleanup process for such a spill can also be expensive and time-consuming, with long-term consequences for the affected area.
In addition to the environmental impacts, fluid leakage can also pose serious health risks to humans. Some industrial fluids, such as chemicals and gases, can be harmful when inhaled or ingested. Even small amounts of these substances can cause serious health problems, including respiratory issues, skin irritation, and even cancer.
Fluid leakage can also cause damage to the pipelines themselves, leading to costly repairs and downtime for the industrial facilities using them. In some cases, the leakage can even lead to explosions or fires, which can cause further damage and put workers and nearby communities at risk.
Overall, fluid leakage in industrial pipelines is a serious issue with far-reaching consequences. It is important for industrial facilities to take steps to prevent leakage and properly maintain their pipelines to minimize the potential for harm to both the environment and human health. This can include regular inspections and maintenance, as well as implementing safety protocols and training workers on how to handle potential leaks. By addressing this issue proactively, we can help protect our planet and keep our communities safe.
As a proposed solution to the issue of fluid leakage in industrial pipelines, we propose the use of artificial intelligence (AI) and machine learning. With this approach, flow rate sensor data is used to detect leaks in pipelines using machine learning algorithms that analyze changes in flow rates and identify deviations from normal patterns that may indicate a leak. This type of AI technology has the potential to significantly improve our ability to detect and respond to fluid leaks in industrial pipelines, helping to prevent undetected leaks from causing damage. In addition, the use of machine learning allows these systems to improve over time, becoming more accurate and reliable at detecting leaks. By leveraging these technologies, we can more effectively protect the environment and human health, and minimize the costs associated with leaks and their cleanup.
The development board used in this project is the Seeed Wio Terminal. We chose it because it is a complete system: screen, development board, input/output interface, and enclosure in one package.
We used a DFRobot Water Flow sensor to detect the flow state of liquid.
It measures the rate of a liquid flowing through it by using a magnetic rotor and a Hall Effect sensor. When liquid flows through the sensor, a magnetic rotor will rotate and the rate of rotation will vary with the rate of flow. The Hall Effect sensor will then output a pulse-width signal.
This is the flow setup and it's pretty simple. Two pipes are attached to the inlet and outlet of the Flow Sensor. We will collect data from the sensor:
When there is no flow
When there is a flow
When there is a leak
To set up your Seeed Wio Terminal for Edge Impulse, you can follow this guide. We are, however, using an alternative method to collect data: the data is collected as CSV files and uploaded to Edge Impulse, and then we proceed with TinyML model generation as usual.
The water flow sensor outputs a pulse signal, so instead of collecting raw analog values from the sensor, we calculated the flow rate using an equation and collected it as time-series data. We collected flow rates for no flow, normal flow, and leak, which the model should be able to distinguish.
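For illustration, the pulse counting and flow-rate conversion could look like the sketch below. The pin assignment and the 7.5 pulses-per-(L/min) factor are assumptions (the factor is typical for YF-S201-style flow sensors); check your sensor's datasheet for the real value.

```cpp
// Count pulses from the flow sensor and convert pulse frequency to flow rate.
volatile unsigned long pulseCount = 0;

void onPulse() { pulseCount++; }

void setup() {
  Serial.begin(115200);
  pinMode(D2, INPUT_PULLUP);  // sensor signal pin (assumed wiring)
  attachInterrupt(digitalPinToInterrupt(D2), onPulse, RISING);
}

void loop() {
  // Sample the pulse counter once per second
  noInterrupts();
  unsigned long pulses = pulseCount;
  pulseCount = 0;
  interrupts();

  float frequencyHz = pulses;          // pulses counted over 1 second
  float flowRate = frequencyHz / 7.5;  // L/min, per the assumed sensor factor
  Serial.println(flowRate);            // one time-series sample per second
  delay(1000);
}
```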
To collect data for your project, follow these steps:
Upload DataCollection.ino to Wio Terminal.
Plug your Wio Terminal into the computer.
Run SerialDataCollection.py in the computer.
Press button C to start recording.
When you have enough data, press button C again to stop recording.
Once you have stopped recording, it will generate a CSV file on your computer. Name it according to the flow state.
Upload the CSV file to Edge Impulse using the Data Acquisition Tab.
After uploading the CSV files, we split the whole data into samples of 6s length.
Then we split the whole dataset into training and testing datasets. Now we have a clean dataset to start our model training.
An impulse is a machine learning pipeline that takes raw data, does signal processing to extract features, and then employs a learning block to categorize new data.
Here we are using Time Series Data as the input block and Raw Data as the processing block. As we have to classify the liquid flow into different states, we are using Classification as the learning block. Now we have an impulse that takes flow rate as input and categorizes the flow into one of three classes.
Now move to the Raw Data tab. We could change parameters like the scale axis, but we keep the default settings and continue on to generate features.
The features are well separated and there is no overlap between the classes, which indicates that we have a good dataset for model generation. So, let's proceed to train the model.
Move on to the Classifier tab. Here we have 3 parameters to modify. First, leave the settings as they are and train the model once. In our case, this produced a model with 30% training accuracy, so we tweaked the parameters several times until satisfactory training accuracy was attained. These are our Neural Network settings.
After training the model for 70 cycles with a learning rate of 0.002, we got an output model with 100% training accuracy and with a loss of 0.12.
Now we have a well-functioning model. Let's test its performance with some previously unseen data. Navigate to Model Testing and Classify All. Here are the results.
Amazing! We got 100% testing accuracy, so our model is ready for deployment.
From the Deployment tab, build an Arduino library. You can enable optimisations with the EON Compiler if you like, but it is optional.
The build will output a .zip file containing the model and some examples. Add the library to the Arduino IDE using Sketch > Include Library > Add .ZIP Library.
Modify the static_buffer.ino example located at File > Examples > Your Project Name > static_buffer > static_buffer.ino to do dynamic inferencing.
After the deployment, now we have a system consisting of a Wio Terminal, Flow Rate Sensor, and AI model that can detect a possible leak in the pipeline. The three modes of output are shown below.
All of the assets for this project are available in this GitHub repository.
Measure both air quality inside a commercial printer, as well as motion / vibration, to identify potential issues before major outages occur.
Created By: Zalmotek
Public Project Link:
https://studio.edgeimpulse.com/studio/140871 - Vibration
GitHub Repo:
https://github.com/Zalmotek/edge-impulse-predictive-maintenance-vibration-commonsense-sony-spresense
Predictive maintenance can help you avoid costly downtime and repairs by predicting when equipment is going to fail, allowing you to schedule maintenance before the problem occurs instead of waiting for something to break. It can also improve safety: by identifying potential hazards before they cause an accident, companies can take steps to prevent accidents from occurring.
The machinery present in a print shop includes printers, copiers, and scanners, used to print, copy, and scan documents. Often there are other machines as well, such as shredders and laminators.
One of our clients has a Xerox iGen4 machine in their print shop; although the machine was launched a number of years ago, when properly maintained and cared for it can still be used to print materials. Common problems with Xerox iGen4 print machines include paper jams, toner issues, and printer errors. These problems can cause the machine to fail and result in lost production, and they can also become safety hazards if left unfixed. While the unit has basic features to identify the above problems in its interface, once you start using the machine there are some blind spots that it cannot safely detect.
The air quality in a print shop can be dangerous because of the chemicals used in the printing process. These chemicals can be harmful to your health if you are exposed to them for too long.
If a print shop has equipment that is predicted to fail, they can schedule maintenance and repairs before the equipment actually fails. This can help avoid costly downtime and lost production. Additionally, if predictive maintenance can identify potential hazards, the print shop can take steps to prevent accidents from occurring.
Our client needed some extra peace of mind and informed us where issues usually arise. We have chosen the Sony Spresense development board paired with the CommonSense expansion board to monitor vibrations and air quality by being placed directly inside the print unit in key points where they could detect issues and report them in real time.
The Sony Spresense is a development board built around a processor developed by Sony for IoT and sensing applications. The main board can be operated alone or with the extension board. The Spresense uses Sony's new chipset on the main board: the CXD5602 System on Chip (SoC) multi-core processor with GNSS, and the CXD5247 power management and audio analog interface chip.
The CommonSense expansion board created by SensiEdge provides an array of very useful sensors that can be used with the Sony Spresense board to capture the data we are interested in, especially the vibration sensor and the air quality one.
Enclosure with wall mount options
Edge Impulse account
Arduino CLI
Edge Impulse CLI
Git
The Spresense main board has the following features: Sony’s CXD5602 Processor, 8 MB Flash memory, PCB with small footprint, Dedicated camera connector, GNSS (GPS) antenna, Pins and LEDs, Multiple GPIO (UART, SPI, I2C, I2S), 2 ADC channels, Application LED x 4 (Green), Power LED (Blue), USB serial port.
From the CommonSense expansion board we are interested in the LSM6DS3 inertial module (3D accelerometer and 3D gyroscope) and the SGP40 air quality sensor:
The LSM6DS3 is a system-in-package featuring a 3D digital accelerometer and a 3D digital gyroscope, enabling always-on low-power features for an optimal motion experience.
The SGP40 is a digital gas sensor designed for easy integration into air purifiers or demand-controlled ventilation systems. Sensirion's CMOSens® technology offers a complete, easy-to-use sensor system on a single chip, featuring a digital I2C interface and a temperature-controlled micro hotplate, providing a humidity-compensated, VOC-based indoor air quality signal. The output signal can be directly processed by Sensirion's powerful VOC Algorithm to translate the raw signal into a VOC Index as a robust measure of indoor air quality. The VOC Algorithm automatically adapts to the environment the sensor is exposed to.
The CommonSense board can be plugged directly into the Sony Spresense, since their pins match perfectly.
The first step in setting up the build environment for the Sony Spresense board equipped with the Common Sense expansion board is installing the GNU Arm Embedded Toolchain.
Determine the latest version of the toolchain:
Download the archive from the official website:
Create a new directory to store the downloaded files:
And finally, extract the toolchain files to the newly created directory:
Finally, add this directory to the PATH environment variable and apply the changes:
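The original command listing is not reproduced in this document; assuming the toolchain was extracted to ~/spresense-toolchain (a placeholder path), this could look like:

```
echo 'export PATH=$PATH:~/spresense-toolchain/bin' >> ~/.bashrc
source ~/.bashrc
```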
Next up in setting up the build environment is installing Python 3.7:
Install the prerequisites for adding custom PPAs:
And then, add deadsnakes/ppa to the local APT package source list:
Run:
And then:
Now that we have python 3.7 installed on our machine, it’s time to create a virtual environment. To do so, make sure you have pip installed:
And then issue the following command to install virtualenv:
With virtualenv installed, run the following command to create an environment that runs Python 3.7:
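The individual command listings for the steps above were not preserved in this document; on a Debian/Ubuntu system, the typical sequence would be (the environment name is a placeholder):

```
sudo apt install software-properties-common   # prerequisites for custom PPAs
sudo add-apt-repository ppa:deadsnakes/ppa    # add deadsnakes/ppa
sudo apt update
sudo apt install python3.7
sudo apt install python3-pip                  # make sure pip is available
pip3 install virtualenv                       # install virtualenv
virtualenv --python=python3.7 spresense-env   # create the Python 3.7 environment
```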
There are a few modules that must be installed before moving on to building the firmware and flashing the board with it:
Everything is now in place to build and flash firmware on the Sony Spresense board.
To build a machine learning model that is able to detect trends in Volatile Organic Compound levels in the air, characteristic of ink or solvent spillage in the printing industry, we will be using the Edge Impulse platform. Register a free account, create a new project, give it a fitting name and press Create new project.
To connect the device to the Edge Impulse platform, you must first download the data forwarder firmware from here. Pick whichever firmware you need: either the firmware used for measuring Volatile Organic Compound levels, or the one used to measure the vibration of the printer.
Launch a terminal, navigate to the software folder and activate the build environment:
Afterwards, build the firmware using make:
After the build is successful, it’s time to flash the board by running:
Now, connect the board to the platform by using the Data Forwarder tool provided in the Edge Impulse CLI suite:
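```
edge-impulse-data-forwarder
```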
You will be prompted to fill in the username and password used to log in on the Edge Impulse platform, and then assign the board to one of the existing projects. After doing so, you will see the development board appear under the Devices tab with a green dot next to it, signifying the fact that it is online and ready for data acquisition.
In the printing industry, there are numerous volatile organic compounds that can be found in the air inside and in the near vicinity of printing presses, such as isopropanol, benzene, ethyl-toluene isomers and styrene. To emulate an ink and solvent spillage, we exposed the CommonSense board to varying concentrations of isopropanol which, being a very volatile compound, easily vaporizes and is picked up by the sensor.
Once the device is connected, go to the Data acquisition page, choose the one axis sensor defined when starting the edge-impulse-data-forwarder tool, set the data acquisition frequency to 25Hz, and begin data recording.
We will define two classes for this application, "Ink_Leakage" and "Normal".
To prevent the detection algorithm from producing a false positive result, it is crucial to add the "Normal" class, which contains readings specific to the normal conditions of the environment in which the system would be implemented. When gathering data it is recommended to have at least 2.5 minutes worth of data for each defined class.
When recording data, you will notice that the board records the sensor readings in a buffer that is ultimately uploaded in the Edge Impulse platform and afterwards, you are presented in the Raw Data tab with the time-domain representation of the newly acquired signal.
It is noticeable that in normal working conditions, the sensor oscillates in a narrow channel, between 30000 and 31000. When an ink or solvent leakage takes place, the value abruptly drops to around 23000.
After you have gathered enough data for both classes, remember to click on the red triangle to rebalance the dataset. An optimal ratio would be 80% Training data to 20% Testing Data.
The testing data pool will be used at the end of the process to see how the machine learning model fares on unseen data, before deploying it on the edge, saving a great deal of time and resources.
For vibration data, we must name the 3 axes exposed by the board as X, Y and Z. Then, when recording data, make sure the 3-axis sensor is selected. Use a Sample length of 10 seconds and leave the frequency at the default value.
When dealing with industrial machinery, running it in a faulty manner to acquire data specific to various malfunctions is out of the question. Instead, our approach is to collect plentiful data while the machine operates nominally, is idle, or is powered off, and create an anomaly detection algorithm that will detect if something is out of order.
Just like when gathering VOC data, make sure you have at least 2.5 minutes of data and perform an 80/20 split.
After populating the dataset, it's time to design the Impulse. The Impulse lets the user control the defining parameters of the whole process: taking raw data from the dataset, pre-processing it into manageable chunks called "windows", extracting the relevant features from them using digital signal processing algorithms, and then feeding them into a classification neural network that assigns them to one of the defined classes.
For the input block, we will be using time series data split into 5000ms Windows, with a Window increase of 1000ms and a frequency of 25Hz.
As a signal processing block we will be using a Raw data block and for the learning block, we will be using a Classification(Keras) block.
The Impulse for the anomaly detection algorithm is rather different from the one used in the VoC case.
For the input block we have decided to go with a 1s window and a 1s window increase. For the processing block we have used a Spectral Analysis block and as a learning block, we have picked an Anomaly Detection NN.
After clicking on Save Impulse, each block can be selected from the Impulse Design submenu and configured.
The Raw data block may be the most straightforward of the processing blocks, because it has just one adjustable option, the "Scale axis" field, which we set to 15. The time-domain representation of the chosen sample can be seen on the upper side of the screen.
Afterwards, click on Save parameters and navigate to the feature generation tab. Nothing can be modified here, so click on Generate features and wait for the job to end.
The Feature explorer lets you quickly assess if the data neatly separates because it provides a visual representation of all the data from the Training dataset. When you click on a data item, the raw waveform, the signal window that was utilized, and a direct link to the signal processing page are all presented. This makes it possible for you to locate the dataset's outlier data points quickly and determine what went wrong with them.
The next block in line is the NN Classifier block. Here you can control the number of training epochs, the rate at which the weights of the linkages between neurons are changed, and the percentage of training dataset samples that are used for validation. Additionally, if needed you can even change the structure of the neural network.
After clicking on Start Training a random value between 0 and 1 is assigned to the weight of each link between the neurons that make up the neural network. Then, the NN is fed the Training data set gathered during the data acquisition phase and the classification output is compared to the correct results. The algorithm then adjusts the weights assigned to the links at a rate defined in the Learning rate field and then compares the results once more. This process is repeated for a number of epochs, defined by the Number of training cycles parameter.
At the end of this process, the Classification Neural Network will be tested on a percent of the samples from the Training dataset held on the side for validation purposes.
Accuracy represents the percentage of predictions in which the result coincides with the correct value and Loss represents the sum of all errors made for all the samples in the validation set.
Underneath those performance indexes, the Confusion matrix presents in tabulated form the percentage of samples that were miscategorised. In our case, 31.3% of Ink_Leakage data points were mislabeled as Normal. This comes with a low rate of false positives as an advantage, but also with a low sensitivity to the phenomenon it's trying to detect.
Finally, the Feature explorer displays all the data from the Training dataset on a 2-axis graph and allows users to quickly determine what data points are outliers and trace back to their source by clicking on them and finding out why a misclassification might occur.
To extract the relevant power and frequency characteristics of the accelerometer signal we will be using a Spectral Analysis block. Here you have the option to add low-pass or high-pass filters to remove unwanted frequencies from the signal, to scale the axes, and to modify the FFT length. It is worth spending some time on this signal processing block until the results are acceptable. A good rule of thumb is that similar input signals must yield similar processing results.
After clicking Save parameters, you will be redirected to the Generate features tab where you must make sure to check the Calculate feature importance option before clicking on Generate features.
The Anomaly Detector block is a great way of detecting any anomalous behavior of the machinery during it’s runtime. Click on Select suggested axes to greatly increase the performance of this NN and to reduce its resource usage, as it will only take into account the features identified in the previous step.
For our application, the algorithm identified the X RMS, X Skewness, X Kurtosis and X Spectral Power 0.41 - 1.22 Hz as the most important features. This algorithm groups similar data points into a predefined number of clusters, and a threshold value is then determined, based on which an area is defined around each cluster. When running an inference, the NN computes the distance from the new data point to the cluster centers; if the point falls outside every cluster, i.e. the distance to the nearest centroid is greater than the threshold value, that data point is registered as an anomaly.
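Schematically, the distance check described above boils down to something like the following (illustrative only; the actual computation is implemented inside Edge Impulse's anomaly detection block, and the cluster count here is an arbitrary placeholder):

```cpp
#include <cmath>

const int N_CLUSTERS = 32;  // placeholder; the cluster count is configurable
const int N_FEATURES = 4;   // e.g. X RMS, X Skewness, X Kurtosis, X Spectral Power

float centroids[N_CLUSTERS][N_FEATURES];  // learned cluster centers
float thresholds[N_CLUSTERS];             // learned per-cluster radii

// Returns how far the point sits beyond its nearest cluster boundary;
// a positive result means the point is registered as an anomaly.
float anomalyScore(const float x[N_FEATURES]) {
  float best = INFINITY;
  for (int k = 0; k < N_CLUSTERS; k++) {
    float d2 = 0.0f;
    for (int f = 0; f < N_FEATURES; f++) {
      float diff = x[f] - centroids[k][f];
      d2 += diff * diff;
    }
    best = fminf(best, sqrtf(d2) - thresholds[k]);
  }
  return best;
}
```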
The Model Testing page lets the user see how the neural network performs when presented with data it has never seen before. After clicking Classify all, Edge Impulse feeds the neural network all the data in the Testing pool and displays, just as during the training process, the classification results and model performance.
Another way of testing out your model is to use the Live Classification function of the Edge Impulse Platform. This tab enables the users to validate the machine learning model using data captured directly from any connected device, giving them a great overview on how it will perform when deployed on the edge. This is great to check if the device is mounted accordingly on the machine it monitors or if it has all the conditions it needs to run optimally.
The solution presented above allows you to schedule maintenance before the problem occurs, instead of waiting for something to break. Additionally, predictive maintenance can improve safety by identifying potential hazards before they cause an accident; in our case, detecting unusual vibration in the printer and solvent or ink leaks can keep the workers safe and the workflow uninterrupted. The combination of the Sony Spresense and the CommonSense board can cover a wide array of use cases; we have only scratched the surface with the accelerometer and the gas sensor.
If you need assistance in deploying your solutions or more information about the tutorial above please reach out to us!
A machine learning model to detect a fault in a refrigerator by monitoring temperature and humidity.
Created By: Swapnil Verma
Public Project Link: https://studio.edgeimpulse.com/public/115503/latest
GitHub Repo: https://github.com/sw4p/Refrigerator_Predictive_Maintenance
A refrigerator is one of the most common and useful appliances in our homes. It has changed society and culture by improving people's quality of life: refrigeration has increased food accessibility and made food preservation much easier, thereby also reducing food wastage.
Refrigeration has also had a significant impact on the medical sector. It has made preserving and transporting certain medicines, including vaccines, easier, thus increasing accessibility. If a refrigerator storing medicine fails, it will spoil the medicines or reduce their effectiveness.
Considering the importance of the refrigerator in our lives, in this project I am building a system that predicts refrigerator failure, enabling predictive maintenance.
My proposed solution is to use a machine learning (ML) model to identify a failure as soon as possible from the temperature and humidity changes inside a refrigerator.
A good machine learning model starts with a good dataset. Sadly, I could not find any open dataset of temperature and humidity levels inside a refrigerator, so I decided to build one.
A machine learning model needs at least two kinds of data to identify refrigerator failure.
Normal operation data - Time-series data from a normally working refrigerator.
Abnormal operation data - Time-series data from a faulty refrigerator.
Different kinds of faults may generate different patterns in the data. For example, a failed compressor will never decrease the temperature. In contrast, a clogged or dirty coil may force the compressor to work harder than usual, taking more time to reach the target temperature.
Unfortunately, I do not have access to a faulty refrigerator for data collection; therefore, I have simulated "abnormal operation" data by
Keeping the fridge door open for an extended period.
This event should increase the temperature and hopefully force the compressor to work harder, thus simulating a fault state.
1. Dataset Preparation
The parameters I want to capture are:
Temperature
Humidity
Illumination - To check the door open/close status. The ML model will not use this parameter, and it is only to help us in visualising and understanding the data.
To capture the above data, I need:
A temperature sensor
A humidity sensor
A light intensity sensor
A microcontroller board
An SD card module
A battery
I already have an Arduino BLE Sense, with a temperature and a humidity sensor attached to an nRF52840 microcontroller; however, I did not have an SD card module for permanent data recording. For this, I used an Arduino Portenta with a Vision Shield, which has SD card support. My somewhat convoluted data collection and recording setup is illustrated in the figure below.
The Arduino BLE Sense collects and formats the data and sends it over BLE to the Arduino Portenta, which then permanently records that data to a microSD card.
2. Software for Dataset Preparation
The software used in the Arduino BLE Sense and Arduino Portenta for data collection is available from this GitHub page.
https://github.com/sw4p/Refrigerator_Predictive_Maintenance
The Dataset_Collector.ino sketch is for the Arduino BLE Sense, and the Data_Recorder.ino sketch is for the Arduino Portenta H7 with a Vision Shield.
The Arduino BLE Sense records temperature, humidity, and illumination readings every 200 ms. The illumination data is used to detect when the fridge door is open: if illumination is greater than 0, the fridge door is open.
3. Data Visualization
The recorded data is in CSV (Comma Separated Value) format, and it looks like this.
Most of the data is collected continuously for 7 to 8 hours at an interval of roughly 200ms. Let's see what the data looks like by plotting it.
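To reproduce plots like the ones below, a short pandas/matplotlib sketch works (the file name and column names here are assumptions; match them to your CSV):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Column names are assumptions -- adjust to match the actual CSV header.
df = pd.read_csv("fridge_log.csv",
                 names=["timestamp", "temperature", "humidity", "illumination"])

plt.plot(df["temperature"], label="Temperature (deg C)")
plt.plot(df["humidity"], label="Humidity (%)")
plt.plot(df["illumination"], label="Illumination")
plt.xlabel("Sample index (~200 ms per sample)")
plt.legend()
plt.show()
```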
Here the orange plot is humidity, blue is temperature, and grey (not visible because it sits on top of the X-axis) is the illumination level in the fridge. Section A shows the temperature and humidity settling into a rhythm, and Section B shows the data once that rhythm is established. Let's check another set of data.
The above graph also shows data collected for 7-8 hours. In the first graph, only temperature (blue) and humidity (orange) levels are shown, whereas, in the second graph, the illumination (grey) is also illustrated. As mentioned before, illumination is recorded to capture the door opening and closing of the refrigerator. Section A is temperature and humidity levels settling, and section B is the normal working of the refrigerator, showing the rhythm of heating and cooling cycles. Section C shows the sudden rise in temperature and humidity levels because I opened the refrigerator door in the morning.
The above image shows a zoomed-in view of the normal operation of a refrigerator. We can clearly see a cooling and heating cycle. Please note that this cooling and heating cycle takes place over a long duration.
So far, we have seen data showing the normal operation; let's check data showing the abnormal operation of a refrigerator.
In the above graph, section A is the settling period, and section B is the normal operation period. Section C is the simulated "abnormal operation" period, where the fridge door was kept open for a long duration. Section D shows the data after the fridge door was closed.
The temperature and humidity levels rose quickly when the fridge door was kept open. We can also see that the compressor is trying to bring the temperature down, but it takes a very long time, and as soon as the compressor stops working, the temperature rises again quickly. It's almost the inverse of the cooling-heating cycle of normal operation.
4. Data Classes
As mentioned previously, due to the unavailability of a faulty refrigerator, I have simulated the abnormal operation using just one technique. That gives me only two classes of data - normal operation and abnormal operation. Let's make the most of what I have got.
For training my ML model, I used Edge Impulse. Edge Impulse is a fantastic tool for building ML solutions quickly.
Edge Impulse has many excellent features for all stages of building an ML solution. One such cool feature is Data Explorer. It makes visualising the data points very easy.
The above image shows the Data Explorer feature of Edge Impulse. As you can see, I have three types of data: a) normal_operation, which captures only the normal working of the refrigerator; b) Anomalous_DO, which captures only the abnormal operation; and c) Combined, which captures both normal and abnormal operation.
For the kind of data I have, an anomaly detection model would be perfect for this project. Thankfully, Edge Impulse provides a K-means anomaly detection model out of the box, so there is no need to prepare my own.
Please follow the Edge Impulse documentation for Impulse creation, data pre-processing, and training an ML model.
For testing the ML model, Edge Impulse provides two methods: Model testing, which uses test data set aside during data acquisition, and Live classification, which uses data streamed from a connected device.
In this project, I have primarily used the model testing method because I already had a lot of data captured. In the data acquisition tab, I assigned some data as test data, which are only used in the model testing.
In the model testing tab, click on Classify all to test the model. You can also set the confidence thresholds by clicking on the three dots beside Classify all.
As you can see from the model testing image above, the ML model is performing remarkably well. It correctly classifies a normal_operation sample as no anomaly and an Anomalous_DO sample as all anomaly. It also correctly classifies the combined samples into a mix of anomaly and no anomaly data points.
To closely examine a classified sample, click on the three dots on a sample and select Show classification.
That will open a classification result page where you can scroll through the data points to evaluate individual classification windows and their anomaly scores. This page also has helpful graphs for visualising raw, pre-processed, and classified samples.
As you can see from the above classification result, the ML model has absolutely nailed the classification. It is correctly detecting anomalous data from a combined sample.
It is not always this perfect, though; some outliers slip through. For example, in the classification result below, the model has detected some anomalies even though inspecting the raw data shows that they should not be anomalies. However, it gives me great relief that such outliers are very low in number and can easily be removed by using better sensors and improving the data quality.
Edge Impulse fully supports the Arduino Nano BLE Sense development board, so the best way to deploy this ML model is to build firmware.
Go to the Deployment page, select the microcontroller board or environment of choice, and click Build. After building the firmware, the download should start automatically.
After extracting the zip folder, run the script_<os_name> file corresponding to your computer's operating system to flash the firmware onto the microcontroller board.
Predictive Maintenance of a motor pump using the Infineon PSoC 6 Pioneer Kit and CN0549 condition-based monitoring platform.
Created By: Pratyush Mallick
Public Project Link:
https://studio.edgeimpulse.com/public/189940/latest
GitHub Repository:
https://github.com/Pratyush-Mallick/ei_cypress_cn549
Pumps are critical equipment in every industrial operation. Pumps move liquids like beverages, dyes, and chemicals around production lines. They're also part of ancillary systems like hydraulics, lubrication, machine cooling, HVAC, and wastewater; all necessary to keep machines working, plant environments safe, and temperatures steady. Whatever pumps are doing, we want to keep them healthy!
For years, manufacturers have been practicing a preventive maintenance approach for industrial pumps. However, this method of monitoring pump health has previously been a potentially time-intensive and costly task due to manual inspections of each piece of equipment. When equipment is large in number and placed in less-accessible areas, the probability of failed equipment going unnoticed for an extended period of time is relatively high.
In industrial settings, pumps provide air to power tools, paint sprayers, and abrasive blast equipment, transfer steam, phase-shift refrigerants for air conditioning and refrigeration, and propel gas through pipelines.
For some applications like steam making, the industrial pump must provide an optimal laminar liquid flow through the pipelines. Despite their harmless appearance, the bubbles in pumping systems are fundamentally distinct from those children typically blow with a wand. When pressure fluctuations inside the pumps give rise to minuscule bubbles, the ensuing collapse of these bubbles generates shock waves that are both powerful and constant. Over time, these recurring shocks wear down the components of the system through erosion, or may ruin the production output entirely. In processing systems, bubbling, or bubble cavitation, should be avoided at all costs.
To address this, we will develop a predictive maintenance solution that gathers vibration data from a motor pump and uses machine learning algorithms to detect any bubble cavitation being formed in the pumping systems, which is considered abnormal behavior.
The whole principle of operation of a motor pump is based on moving parts. The formation and collapse of cavitation bubbles is rapid and violent, which results in a different vibration signal than a normally operating pump produces. When such machinery exhibits anomalous vibration patterns, a malfunction may be developing and critical equipment failure may be coming. Such changes can build up over hours or days, and human operators seldom pick them up. By harnessing IoT devices and machine learning algorithms, these phenomena can be detected and maintenance teams alerted before machinery failure occurs.
Industrial Motor Pump
2 * USB-C cable
Python 3.10
For this application, we will use a condition-based monitoring prototyping platform developed by Analog Devices (CN0549) and the PSoC 6 WiFi-BT Pioneer Kit by Infineon.
The CN0549 is a condition-based monitoring platform based around the integrated electronic piezoelectric (IEPE) standard, a popular signaling interface standard for high-end microelectromechanical systems (MEMS) and piezo sensors that are prevalent in industry today. The kit comes with a mechanical mount optimized for vibration fidelity. For setting up the board with the sensor and to learn more about the hardware, please refer to the links below:
The anchor of this solution is Infineon's PSoC 6 WiFi-BT Pioneer Kit. The application processor is performance-optimized and runs at 150 MHz, and the co-processor is an Arm M0 core that can run at 100 MHz. Both cores are power-efficient. The chip has a floating-point unit (FPU), an 8 KB 2-way associative cache, 1 MB Flash, and 288 KB RAM. The board also has a capacitive sensing block and the programmable digital and analog blocks known as PSoC. It is an excellent pick for developing edge ML applications requiring a direct sensor interface. Moreover, the board's Wi-Fi support and USB host capability can be helpful for high-speed data logging.
Here are some pictures of the hardware before and after assembly. Refer to the CN0549 reference guide for sensor-specific modifications, such as selecting coaxial wire and jumper settings. Switch SW7 to position one on the MCU board side, ensuring that the sensor board is powered from the KitProg2's stable VDD supply. Also, ensure the board is in DAPLink mode for easy debugging in the software section.
Let's start by developing an Edge Impulse project.
Log into your Edge Impulse account, pick Create new project from the menu, give it a recognizable name, choose Developer as the project type, and then click Create new project.
Afterward, select Accelerometer data as the type of data you will be dealing with.
The Industrial I/O (IIO) subsystem is intended to support a device's analog-to-digital or digital-to-analog converters (ADCs, DACs) and beyond. It provides a standard interface to talk to converters across different hardware and software platforms. Using libiio and other application-specific wrapper libraries, the hardware can talk to client-side number-crunching applications such as Python or MATLAB. Since the firmware for the CN0549 is based on the IIO stack, we will use the pyadi-iio package to collect data from the firmware in Python.
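As a rough sketch of what that looks like in practice, the typical pyadi-iio idiom for pulling a block of samples is shown below (the device class name and URI are assumptions; check the pyadi-iio documentation for the class matching your firmware and transport):

```python
import adi  # pyadi-iio

# Device class and URI are assumptions -- check the pyadi-iio docs
# for the class matching the CN0549/CN0540 firmware and your transport
# (serial, USB, or network IIO context).
dev = adi.cn0540(uri="serial:/dev/ttyACM0,230400,8n1")
dev.rx_buffer_size = 1024   # number of samples pulled per block
samples = dev.rx()          # one block of raw ADC samples
print(len(samples), samples[:8])
```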
Before starting to build the dataset, you should install these first, or else the Python code might fail to run:
Python 3.10
The default firmware available for the CN0549 is based on mbed-OS and the iiod stack. Since our CY8CKIT-062-WIFI-BT is also mbed-enabled, we can port the existing code to our board with few modifications. I have already done that for you; you can directly import the firmware into Keil Studio Cloud from the link below, which lets you build and debug the firmware inside the browser.
Some of the modifications done so far to make this compatible are listed below:
Changed the SPI mode in the user_config.c file and the pin mappings in the app_config_mbed.c/.h files. If you want to use different pins or configurations, you can change them in these files.
Disabled some of the wireless stacks in mbed-OS to avoid conflicts. These can be found in the .mbedignore file in the root of the directory.
Changed a pin name in mbed-OS to avoid a conflict with the Edge Impulse library. This happens specifically when using the ARM compiler; with GCC, it builds fine. The modified mbed-OS can be found here, and the mbed-OS for our firmware should be cloned from this repository when we import the code into Keil Studio.
We're using the board as a USB host; you might need to connect another cable to another port.
Change the DAC code in the firmware to remove any DC bias. The IEPE accelerometer has a specific DC bias voltage that must be removed because this voltage does not carry any useful information. This is a crucial step to ensure that you're receiving reliable data. Even changing the length of the cable connected to the sensor can affect the DC bias. You can use the code and select Option 21 (Compensate Piezo sensor offset), which automatically compensates for voltage offsets in the sensor, giving more accurate data. The user should run this after connecting a new sensor.
Since most of the drivers and firmware for the CN0549 are C code and Edge Impulse is C++, we need to use the EI_C_LINKAGE=1 flag to build the code properly. Some other flags and configurations can be found in the mbed_app.json file.
Once the code has been imported into Keil Studio, connect the board through the KitProg2 USB port, and the Studio should automatically detect the device; otherwise, you can manually select the board and build the code.
Ensure that all libraries have been appropriately cloned. The modified mbed-OS and Edge Impulse model libraries are all linked through .lib files with specific commit IDs. Click the exclamation mark shown in the picture below, and that should check out the libraries properly.
Please change the macro in the app_config.h file depending on the intended use: you can use the firmware either for data logging or for inferencing. In inference mode, the firmware returns two extra bytes of classification results to the client along with the accelerometer data.
Once all the modifications are done, build and load the code onto the hardware directly from Keil Studio using the Run button, or download the .hex file and drag and drop it onto the DAPLink drive.
If everything loads up correctly, you can connect the cable to the USB device port on the board, which should be detected.
To build the dataset, we collect the data from the device using the pyadi-iio drivers in Python and then push it to Edge Impulse using the Ingestion API. You can find the Python drivers in the py_supplement folder of the firmware repository, or visit the following link.
This Jupyter notebook has all the initialization and data logging code. Run all the cells in the notebook one by one, except the last two cells. One is used for inference, and the other is for pushing data into Edge Impulse.
Here is a snapshot of the code. Add your HMAC and API_KEY into the code; you can also set the sampling frequency (which needs to be in sync with the sampling frequency specified in the firmware) and then run the code.
Changing the block_size determines the sample length, which also depends on the sampling frequency. Here is a snapshot:
Your device will show up in the Devices section in the Studio. Also, before running the code, set the label in the script, and the data will be arranged accordingly in the Edge Impulse Studio. You should be able to see your incoming data in the Data acquisition tab.
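For reference, here is a condensed sketch of what such an upload step does, using the Edge Impulse data acquisition JSON format and HMAC signing (the device name, sensor name, and values below are placeholders):

```python
import json, time, hmac, hashlib, requests

HMAC_KEY = "your-project-hmac-key"   # from the Edge Impulse dashboard
API_KEY  = "ei_xxx"                  # from the Edge Impulse dashboard
INTERVAL_MS = 1000 / 32000           # must match the firmware sampling rate

def upload(values, label):
    """Push one block of samples to the Edge Impulse ingestion API.
    `values` is a list of readings, one inner list per sample, e.g. [[z], [z], ...]."""
    payload = {
        "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
        "signature": "0" * 64,  # placeholder, replaced after signing
        "payload": {
            "device_name": "cn0549-psoc6",        # placeholder
            "device_type": "CY8CKIT-062-WIFI-BT", # placeholder
            "interval_ms": INTERVAL_MS,
            "sensors": [{"name": "accZ", "units": "m/s2"}],
            "values": values,
        },
    }
    encoded = json.dumps(payload)
    payload["signature"] = hmac.new(HMAC_KEY.encode(), encoded.encode(),
                                    hashlib.sha256).hexdigest()
    requests.post("https://ingestion.edgeimpulse.com/api/training/data",
                  data=json.dumps(payload),
                  headers={"x-api-key": API_KEY,
                           "x-file-name": f"{label}.json",
                           "x-label": label,
                           "Content-Type": "application/json"})

upload([[0.01], [0.02], [0.015]], label="bubbling")
```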
To demonstrate the solution, we built a motor pump system with an inlet and an outlet container. For demonstration purposes, the accelerometer is mounted directly on the wooden sheet near the motor pump unit. The mounting block has a cross-section for inserting a screw into any hardware, so you can also mount the accelerometer and mounting block directly on the pump. Once mounted, we record data for "Normal" and "Bubbling" conditions. For bubbling, we deliberately half-fill the container so air bubbles can form at the motor inlet.
Be sure to collect data with various motor power selections to avoid overfitting the model.
During regular operation, the motor pump produces a low-amplitude vibration, with a rhythmic increase in amplitude once every cycle. When bubbling occurs, however, multiple rhythmic spikes appear in the signal.
Here are some videos demonstrating the data collection process:
After building the dataset, it's time to create the Impulse. An Impulse is the pipeline that gathers data, passes it through a preprocessor, feeds it into a neural network, and outputs the result, with each step of the process being customizable.
For this application, we will use an input block with a 160 ms window size and a window increase of 80 ms, at an acquisition frequency of 32,000 Hz. A Spectrogram block is used as our Processing block, and a Classification block as our Learning block, which works well for audio and vibration data.
If you need help determining which blocks to select, you can always try the Edge Impulse EON Tuner, which can evaluate different Impulse architectures and give insights into the suitable ones for your specific application.
For the sake of simplicity, there are only two conditions to detect; however, you can collect data for as many classes as you want and infer them.
The Spectrogram processing block extracts time and frequency features from a signal. It performs well on audio data for non-voice recognition use cases or any sensor data with continuous frequencies. Low-pass and high-pass filters can be used in this block to eliminate undesirable frequencies. As with our use case, this block typically performs well when decoding recurrent patterns in a signal, such as those caused by the vibrations or motions picked up by an accelerometer unit.
Under the Parameters tab, you can configure your spectrogram features or let the Studio do it by clicking on the "Autotune Parameters" button.
After being redirected to the Feature generation tab, check "Calculate feature importance" and press Generate features. Calculating the importance of each signal feature is a great asset of the Edge Impulse platform, as it allows the block to prioritize those values as they are the most meaningful for the observed phenomenon.
The Feature Explorer allows you to quickly check if the data separates nicely, as it visually represents all the data from the Training dataset. Any point in the Feature explorer can be hovered over to reveal the source for that point. If you work with time series data, clicking on a data item will show you the raw waveform, the utilized signal window, and a quick link to the signal processing page. This makes identifying the outlier data points in your dataset very simple.
The NN Classifier block's configuration is the next phase in developing the machine learning algorithm. The number of training cycles, learning rate, size of the validation set, and whether or not the Auto-balance dataset function is enabled are just a few of the factors that can be modified.
These settings control the number of epochs the neural network is trained for, how quickly the weights of the links between neurons are adjusted each epoch, and the proportion of samples from the training dataset used for validation. The architecture of the neural network is also detailed, and can be changed as well.
Edge Impulse also provides options to augment the preprocessed data, which can help avoid overfitting the model, making it robust against a wide range of input data.
Leave everything on default settings for the time being, and click Start training.
After the training has been assigned to a cluster, the training performance tab will be displayed. Here, you can view in tabulated form the correct and incorrect predictions made by the model after being presented with the Validation data set. When training a neural network, we aim for a high Accuracy (the percentage of predictions where the expected value matches the actual value of the data input) and a low Loss (the total sum of errors produced for all the samples in the validation data set).
Underneath those performance indices, you can visually explore the data to find the outliers and the mislabeled data. You can see that on the right side of the graphic, there is a small cluster of "Normal" data points that were mislabeled, represented with red dots.
You can also explore different data type (quantization) options for the model and their impact on model output parameters, such as RAM usage and accuracy.
The Model Testing tab allows users to quickly evaluate how the machine learning model fares when presented with new data. The platform uses the data available in the Test data pool, defined during the Data acquisition phase, and evaluates the performance of the model.
Since we're using our custom firmware, live testing is not supported directly. However, if you want to use the live testing option in the Edge Impulse Studio, you can go through the steps in the following link: https://docs.edgeimpulse.com/docs/edge-impulse-studio/live-classification
Edge Impulse allows users to export the machine learning model they have just created as a pre-compiled binary for the supported platforms without going through the effort of building custom firmware.
However, since our platform uses sensors different from those supported on the original board, we must download the model as a C++ library and then integrate the SDK into our firmware.
You can build the firmware with the SDK offline using the GCC compiler or Keil Studio Cloud online, as I did. You can drag and drop the Edge Impulse C++ library into the folder structure, or create a git repository and import it into Keil Studio as we did initially.
To view the inference results, you must rebuild the code with the inference macro enabled in the app_config.h file. This builds the code to transport the inference results as part of the data stream to the host; on the client side, the stream can then be separated.
Run the Jupyter notebook with all the instructions as we did during data logging, with just one difference: run the last cell, which is modified explicitly for inferencing.
If everything goes well, hurray! You should see the streaming data along with the classification results in your Jupyter notebook.
Employing machine learning-based models for predictive maintenance can help us efficiently operate equipment, plan for downtimes, and increase longevity.
Baking intelligence into the edge and moving computing closer to where data is generated is a paradigm shift from traditional computing, and Edge Impulse is at the forefront.
Imparting intelligence is just one aspect; selecting a proper data collection platform is equally important, as it can make or break a system. Platforms like the CN0549, with their software and hardware scalability, provide a great path for the data acquisition needed to build better TinyML models.
Use machine learning classification to monitor the operation of a DC motor, and identify fault conditions that indicate a need for maintenance.
Created By: Swapnil Verma
Public Project Link:
Nearly all machines require routine maintenance to keep functioning properly. If maintenance is not provided, they can break down abruptly. Sometimes, even between routine maintenance, parts of a machine may fail. A failure of a mission-critical or high-availability system can be disastrous for an organisation. To avoid such a scenario, a condition monitoring system is recommended for predictive maintenance, helping detect a potential failure in advance and possibly reduce downtime.
Most condition monitoring systems are architected similar to the below image:
This generally works, but it has a few potential problems:
Cloud services used in condition monitoring or predictive maintenance systems cost a lot of money in licensing and subscription fees.
Confidential data which an organisation may want to process and store on-site may also require an on-premise server to run the cloud services and software, which again adds to the cost of the system.
Microcontrollers are cheap and have a lot of computing power. They are not always used to their full potential. We are mostly using them for data capture.
The solution I am proposing focuses on the machine learning part of the conventional condition monitoring architecture. Instead of using cloud services for inference of the classification algorithm, we can use a microcontroller and TinyML.
Other parts of the architecture (e.g. database, dashboard, data ingestion model etc.) can also be replaced with in-house developed solutions, if desired.
To demonstrate my solution, I prepared a test setup which requires the following components:
Syntiant TinyML board
A microSD card - the Syntiant TinyML board requires this for IMU data collection.
A DC motor
A motor controller - For this application I have used the below items to prepare a motor controller circuit:
Arduino MKR WiFi 1010
MKR motor shield
A battery
Different loads to simulate normal and failure operations
A 3D printed workbench
I have used this hardware setup for data collection and testing the neural network model.
Note: Look closely at the picture of the fan blades; some are intentionally missing blades in order to create imbalances and rotational movements that differ from a "regular" fan, simulating fault conditions.
To build this prototype, the following steps are required:
The computer in this setup performs multiple jobs. It is a UI for controlling the DC motor over a serial connection to the Arduino, and it is also a gateway for connecting the Syntiant TinyML board to the Edge Impulse Studio.
A user can start or stop the DC motor via the Arduino serial connection, which then generates vibrations as it turns on and off. The pitch and amplitude of the vibration varies based on the load attached to the motor shaft.
The Syntiant TinyML board is physically attached to the motor mount. The board has a 6-axis motion sensor which picks up the vibrations generated by the motor and the load.
The vibration data is collected and sent to the Edge Impulse Studio for training and testing.
Edge Impulse has simplified the machine learning pipeline extensively for TinyML. It has made data collection, training a model using the collected data, testing the model, and deploying that model back to the embedded board trivial.
Note: Please make sure to download and flash the IMU firmware and NOT the Audio firmware.
After flashing the firmware, run the edge-impulse-daemon command from a terminal.
This will start a wizard, and ask you to login and choose an Edge Impulse project. This is a good time to prepare a project in Edge Impulse if you have not already done so.
The above step should establish communication between the Syntiant TinyML board and your Edge Impulse project. To verify that, navigate to the Devices tab of the project. You should see the Syntiant TinyML board listed, and the Remote management column should have a green dot in it.
After establishing a connection between the Syntiant TinyML board and the Edge Impulse Studio, we will now set up the Arduino and the rest of the test bench for simulating a machine status.
Download the Arduino code from the below GitHub repository and flash it to the MKR WiFi 1010:
The Arduino takes the following commands via its serial connection:
MOTORON : Turn the motor ON
MOTOROFF : Turn the motor OFF
MOTORSPEED <-100 to 100> : Change motor speed and direction based on the number provided.
After connecting a load to the motor, turn the motor on by sending MOTORON via serial to the Arduino.
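For example, a few lines of pyserial from the computer can drive the motor through these commands (the port name, baud rate, and line ending are assumptions; match them to your Arduino sketch):

```python
import serial, time

# Open the serial port the Arduino MKR WiFi 1010 is attached to
# (port name and baud rate are assumptions -- match your sketch).
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as arduino:
    time.sleep(2)                      # give the board time to reset
    arduino.write(b"MOTORON\n")        # start the motor
    time.sleep(5)
    arduino.write(b"MOTORSPEED 50\n")  # half speed, forward
    time.sleep(5)
    arduino.write(b"MOTOROFF\n")       # stop the motor
```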
Now navigate to the Data acquisition tab in the Edge Impulse Studio. Here you will find the device we connected in the previous step, and the sensor list. Select the Accelerometer sensor and use the default parameters.
Add a Label name based on the load connected. If it is a balanced load then use Normal_Motion as a label, and for unbalanced loads (fans that are missing blades), use Error as a label. Labels are classes of your data.
Click Start Sampling, which will start the sample collection process. Once the sample is collected, it will be automatically uploaded to the Studio.
Repeat this process for unbalanced loads and also for the "Motor off" condition. Also make sure to collect a proportional amount of data per class.
Note: The Syntiant NDP chip requires a negative class on which no predictions will occur; in our example, this is the Z_No_Motion (motor off) class. Make sure the negative class name comes last in alphabetical order.
Once enough data is collected, split it into Train and Test datasets from the Dashboard:
After data collection, the next step will be machine learning model preparation. To do so, navigate to the Impulse design tab and add the relevant Preprocessing and Learning blocks to the pipeline.
The Edge Impulse Studio will automatically add an input block, and it will recommend suitable Preprocessing and Learning blocks based on the data type (Spectral features and Classification for IMU data, in this case). I have used the recommended ones in this project with the default arguments.
After Impulse design is complete, save the design and navigate to the Preprocessing tab (Spectral features in this case) for the feature generation.
Click on the Save parameters button, then navigate to the Generate features tab and click the Generate features button for data preprocessing.
After feature generation is complete (it could take a few minutes), please navigate to the Learning tab (Classifier in this case) to design the neural network architecture. Here again, I have used the default architecture and parameters recommended by the Edge Impulse Studio. After selecting a suitable training cycle and learning rate, click on the Start training button.
Once the training is complete, navigate to the Model testing tab and click the Classify all button. This will begin evaluating the built model against unseen data: the Test bucket of data set aside earlier when we did the split.
After testing is finished, the Edge Impulse Studio will show the model accuracy, and other parameters.
Even though it is a simple example, the Edge Impulse Studio prepared an excellent machine learning model just by using the default recommended parameters, in just a couple of minutes.
Once the training and testing is complete, we can convert the machine learning model into a library or binary for the Syntiant TinyML board and deploy it for local inference on the device.
Because the Syntiant TinyML board is fully supported by Edge Impulse, this task is as easy as the previous procedures.
Simply navigate to the Deployment tab and select the target device. Also, specify the deployment type based on the options provided and click Build.
After building the binary or library, the Studio will automatically download the firmware to the computer and provide guidance on how to flash it to the selected board. Usually this requires running the downloaded flash_<operating_system> script, which flashes the binary onto the Syntiant TinyML board.
Once the firmware is loaded onto the board, you can run inference locally on the Syntiant TinyML board by using the Edge Impulse CLI to launch a runner application. With the TinyML board attached to your computer via USB, run edge-impulse-run-impulse from a terminal to begin inferencing.
This will output classification results in the terminal, and you can verify that your model is properly predicting the normal, unbalanced, and off states of the motor.
At this point, you can iterate and build your own firmware, integrate the inferencing into your own application, and develop alerting capabilities to raise awareness of unexpected or out-of-bounds conditions.
With some physics and a TinyML model, add weight prediction to a pallet-wrapping machine.
Created By: Simone Salerno
Public Project Link:
In industrial settings, many factories need to handle pallets. It is a storage format that spans almost all sectors.
To speed up the packaging process, there is a machine that is devoted to wrapping the pallet contents into a plastic film to keep the contents tight and secured.
That's the sole purpose of this machine in the factory or production facility. But with the help of machine learning, we can upgrade these existing dumb machines with a new feature: weighing the pallets.
It may not be obvious, but we don't need a weight or pressure sensor to do this. Nor do we need to modify the circuitry or retrofit the machine.
Instead, we can use a "plug-in", external device that only consists of an accelerometer and a microcontroller.
And as we'll see shortly, this external device can even add predictive maintenance capabilities to the machine by pro-actively identifying malfunctions from the data and patterns collected.
The methodology behind this measurement technique is pretty simple: the pallet machine has a rotating motor at its core that is necessary to wrap the plastic film around the pallet.
During its rotation, the motor is subject to friction that is proportional to the weight on the platform. We can capture slight variations in the rotation pattern by means of an IMU.
We'll then use the accelerometer and gyroscope data as a proxy for the friction on the motor. By modelling this relation through machine learning, we aim to be able to predict the weight based on the IMU readings.
This will work wonderfully, because the machine always applies the same rotation force to the motor: if a large weight is on the platform, it will rotate slower than if the platform had no weight upon it.
Once we've modelled the relation between IMU data and weight, we can use it another way too: if we know the true weight of the pallet that's on the platform, we can compare it with the predicted weight, and look for discrepancies.
If they do not match by a large amount, it means that something is not working as usual. If the predicted weight is much higher than the actual one, it may mean that the motor is subject to more friction than it should be and that friction is not due to the pallet itself. Perhaps it needs to be oiled to work more smoothly, or some other issue is causing added strain on the motor.
The requirements are pretty simple: on the hardware side you only need an IMU and a microcontroller (or a board with an integrated IMU, such as the Arduino Nano BLE Sense).
To avoid using cables that may interfere with the operation of the machine, it is advisable to choose a board that has either WiFi or Bluetooth radio, so you can stream data to your PC wirelessly.
The setup is simple too: assemble your board with a battery in a plastic box, and anchor it on the rotating platform, near the edge of the platter (at the border, linear velocity is greater than in the center, so the IMU can pick up pattern variations more easily).
This project is articulated in 3 steps:
Collect training data
Design the Impulse
Deploy the model and use it
The first step is to collect training data for our model.
If using the Arduino Nano BLE Sense (or similar board with integrated IMU and BLE), you can use the following two code snippets: the first one has to be flashed on the board to enable the BLE data streaming, the second one has to run on your PC to receive the streamed data.
On the Arduino:
On your PC, you need Python to run the following script that connects to the microcontroller and saves the streamed data to a file:
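If you want to write your own receiver, the essentials with the bleak library look like this (the advertised device name, characteristic UUID, and line format are assumptions that must match the Arduino sketch):

```python
import asyncio
from bleak import BleakScanner, BleakClient

# UUID of the characteristic the Arduino sketch notifies IMU data on
# (hypothetical -- use the UUID defined in your Arduino code).
IMU_CHAR_UUID = "19b10001-e8f2-537e-4f6c-d104768a1214"

async def main():
    # The advertised name is an assumption -- match your sketch.
    device = await BleakScanner.find_device_by_name("Nano33BLE")
    if device is None:
        raise RuntimeError("Board not found - is it powered and advertising?")
    with open("weight_040kg.csv", "w") as f:
        def on_notify(_, data: bytearray):
            # Assumes the sketch streams CSV lines, e.g. "ax,ay,az,gx,gy,gz"
            f.write(data.decode().strip() + "\n")
        async with BleakClient(device) as client:
            await client.start_notify(IMU_CHAR_UUID, on_notify)
            await asyncio.sleep(30)   # collect 30 s of data per weight
            await client.stop_notify(IMU_CHAR_UUID)

asyncio.run(main())
```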
To accurately model the IMU <-> weight relation, you need a few reference weights. How many of them and at what increments depends on your use case.
For this guide, I collected data at the following weights (in kg):
0
40
80
120
160
200
240
280
320
430
600
1000
At lower weights (up to 320 kg), I collected data at 40 kg intervals because I wanted to differentiate at a finer granularity. Then I increased the step to roughly 100, 200, and 400 kg because at higher weights I only wanted to get a rough idea.
Feel free to customize your own scale as you see fit.
Warning: you can't expect to achieve very fine granularity (e.g. 1-5 kg) because the friction variation on the motor would be too small. Aim for steps of at least 40-50 kg.
As with all machine learning projects, the more data you collect, the better. I collected 30 seconds of data for each weight at a 26 Hz sampling rate. If your IMU supports higher rates (most allow up to 104 Hz), you can use that and test whether it increases your overall accuracy. The longer you collect data for, the more robust your model will be.
For each weight on the machine, follow these procedures:
Put the microcontroller board on the platform and turn it on
Put the weight on the platform
Start the machine and let it go for a few seconds (so it reaches its normal speed)
Run the Python script and wait for the data collection to complete
Input a name for the CSV file that will contain data for the given weight
Repeat the process for each weight.
You will end up with a list of CSV files, one for each weight. This is an easy format to import into Edge Impulse.
Edge Impulse allows for 3 different tasks:
Classification
Regression
Anomaly detection
In our case, we want to model a continuous relation between the input (IMU data) and the output (weight), so it is a 'regression' task.
More specifically, this is a time-series regression task, so we will need to window our data and extract spectral features from it. This is most often the case when working with time series data.
The window duration depends on the working speed of your machine. My advice here is to go with a large duration, because we expect the rotation to not be very fast: if your window is too short, it won't contain much variation in data.
Nevertheless, this is mostly a trial-and-error process. Since Edge Impulse makes it so easy to experiment with different configurations, start with a reasonable value of 3-5 seconds and then tune based on the accuracy feedback.
The model doesn't need to be overly complex: start with a 2-layer fully-connected network and see if it performs well for you. If not, increase the number of layers or neurons.
Once you're satisfied with the results, it is time to deploy the trained Neural Network back to your board.
Once again, we'll use BLE to stream the predicted weight wirelessly to a PC. On the Arduino, run this snippet:
The ImpulseBuffer is a data structure that holds an array where you can push new values. When the buffer is full, it shifts the old elements out to make room for the new ones. This way, you have an "infinite" buffer that mimics the windowing scheme of Edge Impulse.
To perform the prediction over the window of collected data, you only need to call impulse.regression(buffer.values) and use the result as your project requires.
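Conceptually, the buffer behaves like a fixed-size sliding window; here is a sketch of the idea in Python for clarity (the on-device implementation is C++, and the sizes below are illustrative):

```python
from collections import deque

class SlidingWindow:
    """Fixed-size window: pushing new samples shifts the oldest ones out,
    mimicking Edge Impulse's windowing over a continuous signal."""
    def __init__(self, num_values):
        self.values = deque(maxlen=num_values)

    def push(self, ax, ay, az):
        self.values.extend((ax, ay, az))

    @property
    def is_full(self):
        return len(self.values) == self.values.maxlen

window = SlidingWindow(num_values=3 * 26 * 5)  # 3 axes, 26 Hz, 5 s window
window.push(0.01, 0.98, 0.02)
# once window.is_full, run the regression over window.values
```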
In this example, we stream the value over BLE. In your own project, you could also use the value to control an actuator or raise an alarm when certain weights are detected.
To give you a real-world example of how to use this project, we'll pretend we have an LED display near the stretch-film machine where we want to see the predicted weight in real time.
Since we're already streaming the data over BLE, we need a receiver device connected to the display. For the sake of the example, we'll use another Arduino BLE Sense.
On this device, run the following snippet:
This should then render the predicted weight on the 7-segment display.
This project adds machine learning to a traditional industrial machine, making it smarter and giving it predictive maintenance capabilities. Using only a microcontroller and an IMU, we were able to add weight estimation for pallets, and we can identify when the rotational speed (force) of the motor is inconsistent with predicted values.
Setting up Spectral Analysis is fairly easy. In most cases we can rely on the Edge Impulse Studio to choose the appropriate parameters by clicking the Autotune parameters button:
To test the model against the Test dataset, go to the Model testing tab and click the Classify all button. After a couple of seconds the classification results are shown:
Microcontrollers like the one found on this board are powerful enough to run machine learning models with 3 dense layers and 256 neurons in each layer. Further, this is accomplished with ultra-low power consumption. We can utilise this board to capture data and perform classification locally with the help of Edge Impulse.
More information about how to connect a supported MCU board with Edge Impulse is available in the Edge Impulse documentation.
To proceed with IMU data collection using the Syntiant TinyML board and Edge Impulse, you must flash the provided firmware. If it is your first time using the Syntiant TinyML board with Edge Impulse, I would recommend following the getting-started guide from the beginning.
If you are using a fully supported development board, this part is very straightforward.
Use multi-modal audio classification and thermal imaging to identify anomalies or defects in HVAC cooling systems.
Created By: Kutluhan Aktar
Public Project Link: https://studio.edgeimpulse.com/public/418121/latest
GitHub Repository: https://github.com/KutluhanAktar/AI-driven-Sound-Thermal-Image-based-HVAC-Fault-Diagnosis/
One of the most prominent hurdles in operating manufacturing plants is regulating the enervating heat produced by industrial processes. Therefore, an efficient industrial cooling system is the fulcrum of managing a profitable, sustainable, and robust industrial facility. There are various cooling system designs and structures to provide versatile heat regulation for different business requirements. For instance, natural draft cooling exploits the density difference between the produced hot air and the ambient fresh air, mechanical draft cooling utilizes sprayed hot water to transfer heat from a condenser to dry air, and water cooling uses cold water directly to reduce the targeted component's temperature.
When all cooling requirements are considered, water cooling options are still the most popular and budget-friendly cooling systems, applicable to various cooling scenarios, including but not limited to condominiums, office buildings, and industrial facilities. Water cooling systems, also known as hydronic cooling systems, are widely considered the most adaptable and advantageous HVAC (heating, ventilation, and air conditioning) systems, utilizing water to transfer heat from one location to another[^1]. Since hydronic HVAC systems use water to absorb and transfer heat, they are more energy efficient than air-based systems, as water has a higher thermal capacity. Depending on the applied heat transfer method and water source, water-based cooling systems provide design flexibility with low maintenance.
Nonetheless, despite the advantages of relying on water as a coolant, water-based HVAC systems still require regular inspection and maintenance to retain peak condition and avert pernicious cooling aberrations deteriorating heat regulation for industrial facilities, office buildings, or houses. Since water-based cooling equipment is a part of various demanding industrial applications[^2], including but not limited to chemicals or petrochemicals, welding, medical, pharmaceutical, automotive, data centers, and metalworking, maintaining consistent and reliable heat transfer is essential to sustain profitable business growth. Thus, to reduce production costs and increase manufacturing efficiency, mechanics should examine each cooling component painstakingly and regularly.
Since hydronic HVAC systems can be intricate and multifaceted depending on the application requirements, there are plentiful malfunctions that can affect cooling efficiency and heat transfer capacity, resulting in catastrophic production downtime for industrial processes. For instance, chillers using metal tubes (copper or carbon steel) to circulate water are susceptible to corrosion and abrasion, leading to leaks and component failures. Accumulating sediment or particulates in the complex tubing systems can corrode or clog pipes, leading to inadequate heat transfer. Likewise, neglected electronic components can degrade and fail due to prolonged wear and tear, leading to inconsistent cooling results. Unfortunately, these HVAC system malfunctions not only deteriorate industrial process sustainability but also engender hazardous environmental impacts due to high energy loss.
Water-based or not, an installed HVAC system accounts for up to 50% of the total energy consumption of an establishment, surpassing the total energy consumption of lighting, elevators, and office equipment[^3]. Thus, an unnoticed abnormality can multiply energy consumption while the HVAC system tries to compensate for the heat transfer loss. Furthermore, since HVAC systems are tightly coupled systems and operate with protracted lag and inertia, they are vulnerable even to minuscule abnormalities due to the ripple effect of a single equipment failure, whether a capacitor, pipe, or gasket.
Relevant data indicates that the energy waste caused by malfunctioning cooling systems and faulty controls accounts for about 15%–30% of the total energy consumption of the studied facilities. By running a malfunctioning cooling system, buildings become profligate energy devourers, driving harsh energy production demands that emit excess carbon and methane into the atmosphere. Therefore, applying real-time (automated) malfunction diagnosis to HVAC systems can abate excessive energy consumption and improve energy efficiency, leading to savings ranging from 5% to 30%[^3]. In addition to preventing energy loss, automated HVAC fault detection can extend equipment lifespan, avoid profit loss, and provide stable heat transfer during industrial processes. In that regard, automated malfunction detection also obviates exorbitant overhaul processes due to prolonged negligence, which would otherwise lead to a nosedive in production quality.
After perusing recent research papers on detecting component failures to automate HVAC maintenance, I noticed that there are no practical applications focusing on identifying component abnormalities of intricate water-based HVAC systems to diagnose consecutive thermal cooling malfunctions before instigating hazardous effects on both production quality and the environment. Hence, I decided to build a versatile multi-model AIoT device to detect anomalous sound emanating from cooling fans via a neural network model and to diagnose consecutive thermal cooling malfunctions based on specifically produced thermal images via a visual anomaly detection model. In addition to AI-driven features, I decided to develop a capable and feature-rich web application (dashboard) to improve user experience and make data transfer easier between development boards.
As I started to work on developing my AI-powered device features, I realized that no available open-source data sets were fulfilling the purpose of multi-model HVAC malfunction diagnosis. Thus, since I did not have the resources to collect data from an industrial-level HVAC system, I decided to build a simplified HVAC system simulating the required component failures for data collection and in-field model testing. I got heavily inspired by PC (computer) water cooling systems while designing my simplified HVAC system. Similar to a closed-loop PC water cooling design, I built my system by utilizing a water pump, plastic tubings, an aluminum radiator, and aluminum blocks. As for the coolant reservoir, I decided to design a custom one and print the parts with my 3D printer. Nonetheless, since I decided to produce a precise thermal image by scanning cooling components, I still needed an additional mechanism to move a thermal camera on the targeted components — aluminum blocks. Thus, I decided to design a fully 3D-printable CNC router with the thermal camera container head to position the thermal camera, providing an automatic homing sequence. My custom CNC router is controlled by Arduino Nano and consists of a 28BYJ-48 stepper motor, GT2 pulleys, a timing belt, and gear clamps. While producing thermal images and running the visual anomaly detection model, I simply added an aquarium heater to the closed-water loop in order to instantiate aluminum block cooling malfunctions.
As mentioned earlier, to provide full-fledged AIoT features with seamless integration and simplify complex data transfer procedures between development boards while constructing separate data sets and running multiple models, I decided to develop a versatile web application (dashboard) from scratch. To briefly summarize, the web dashboard can receive audio buffers via HTTP POST requests, save audio samples by given classes, communicate with the Particle Cloud to obtain variables or make Particle boards register them, produce thermal images from thermal imaging buffers to store image samples, and run the visual anomaly detection model on the generated thermal images. In the following tutorial, you can inspect all web dashboard features in detail.
Since this is a multi-model AI-oriented project, I needed to construct two different data sets and train two separate machine learning models in order to build a capable device. First, I focused on constructing a valid audio data set for detecting anomalous sound originating from cooling fans. Since XIAO ESP32C6 is a compact and high-performance IoT development board providing 512KB SRAM and 4 MB Flash, I decided to utilize XIAO ESP32C6 to collect audio samples and run my neural network model for anomalous sound detection. To generate fast and accurate audio samples (buffers), I decided to use a Fermion I2S MEMS microphone. Also, I connected an SSD1306 OLED display and four control buttons to program a feature-rich on-device user interface. After collecting an audio sample, XIAO ESP32C6 transfers it to the web dashboard for data collection. As mentioned earlier, I designed my custom CNC router based on Arduino Nano due to its operating voltage. To provide seamless device operations, XIAO ESP32C6 communicates with Arduino Nano to move the thermal camera container head.
After constructing my audio data set, I built my neural network model (Audio MFE) with Edge Impulse to detect sound-based cooling fan abnormalities. Audio MFE models employ a non-linear scale in the frequency domain, called the Mel scale, and perform well on audio data, mostly for non-voice recognition. Since Edge Impulse is compatible with nearly all microcontrollers and development boards, I did not encounter any issues while uploading and running my Audio MFE model on the XIAO ESP32C6. As labels, I simply differentiated the collected audio samples by the presence of a cooling fan failure:
normal
defective
After training and testing my neural network model (Audio MFE), I deployed the model as an Arduino library and uploaded it to XIAO ESP32C6. Therefore, the device is capable of detecting anomalous sound emanating from the cooling fans by running the neural network model onboard without any additional procedures or latency.
Since I wanted to employ the secure and reliable Particle Cloud as a proxy to transfer thermal imaging (scan) buffers to the web dashboard, I decided to utilize Photon 2, which is a feature-packed IoT development board optimized for cloud prototyping. To collect accurate thermal imaging buffers, I employed an MLX90641 thermal imaging camera producing 16x12 IR arrays (buffers) with fully calibrated 110° FOV (field-of-view). Also, I connected an ST7735 TFT display and an analog joystick to program a secondary on-device user interface. Even though I managed to create a snapshot (preview) image from the collected thermal scan buffers, Photon 2 is not suitable for generating thermal images, saving image samples, and running a demanding visual anomaly detection model simultaneously due to memory limitations. Therefore, after registering the collected thermal scan buffers to the Particle Cloud, I utilized the web dashboard to obtain the registered buffers via the Particle Cloud API, produce thermal image samples, and run the visual anomaly detection model.
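To illustrate the image-generation step, a 16x12 buffer of temperatures can be rendered as an upscaled false-color image along these lines (a sketch; the dashboard's actual palette, orientation, and scaling may differ):

```python
import numpy as np
from PIL import Image
from matplotlib import cm

def buffer_to_image(buf, scale=20):
    """Convert a 192-value MLX90641 temperature buffer (16x12)
    into an upscaled false-color image."""
    frame = np.array(buf, dtype=float).reshape(12, 16)       # rows x cols (orientation assumed)
    norm = (frame - frame.min()) / max(np.ptp(frame), 1e-6)  # normalize to 0..1
    rgba = cm.inferno(norm)                                  # map temperatures to colors
    img = Image.fromarray((rgba[:, :, :3] * 255).astype(np.uint8))
    return img.resize((16 * scale, 12 * scale), Image.NEAREST)

buffer_to_image([25.0] * 192).save("thermal_sample.png")
```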
Considering the requirements of producing accurate thermal images and running a visual anomaly detection model, I decided to host my web application (dashboard) on a LattePanda Mu (x86 Compute Module). Combined with its Lite Carrier board, LattePanda Mu is a promising single-board computer featuring an Intel N100 quad-core processor with 64 GB onboard storage.
After constructing my thermal image data set, I built my visual anomaly detection model with Edge Impulse to diagnose ensuing thermal cooling malfunctions after applying anomalous sound detection to the water-based HVAC system. Since analyzing cooling anomalies based on thermal images of HVAC system components is a complicated task, I decided to employ an advanced and precise machine learning algorithm based on the GMM anomaly detection algorithm and FOMO. Supported by Edge Impulse Enterprise, FOMO-AD is an exceptional algorithm for detecting unanticipated defects by applying unsupervised learning techniques. Since Edge Impulse is compatible with nearly all microcontrollers and development boards, I did not encounter any issues while uploading and running my FOMO-AD model on the LattePanda Mu. As labels, I utilized the default classes required by Edge Impulse to enable the F1 score calculation:
no anomaly
anomaly
After training and testing my FOMO-AD visual anomaly detection model, I deployed the model as a Linux (x86_64) application (.eim) and uploaded it to LattePanda Mu. Thus, the web dashboard is capable of diagnosing thermal cooling anomalies based on the specifically produced thermal images by running the visual anomaly detection model on the server (LattePanda Mu) without any additional procedures, reduced accuracy, or latency.
In addition to the discussed features, the web dashboard informs the user of the latest system log updates (completed operations) on the home (index) page automatically and sends an SMS to the verified phone number via Twilio so as to notify the user of the latest cooling status.
Considering the complex structure of this device based on a customized water-based HVAC system, I decided to design two unique PCBs after testing the prototype connections via breadboards. Since I wanted my PCB designs to represent the equilibrium of cooling fan failures and thermal (heat) malfunctions, I got inspired by two ancient rival Pokémon — Kyogre and Groudon.
Finally, in addition to the custom CNC router and coolant reservoir parts, I designed a plethora of complementary 3D parts, from unique PCB encasements to radiator mounts, so as to make the device as robust and compact as possible. To print flexible parts handling water pressure, I utilized a color-changing TPU filament.
So, this is my project in a nutshell 😃
Please refer to the following tutorial to inspect in-depth feature, design, and code explanations.
🚀🤖 Furthermore, you can check the brand-new ELECROW project community to gain insight into the manufacturing process of my PCB designs.
⭐ XIAO ESP32C6 | Inspect
⭐ Grove - MLX90641 Thermal Imaging Camera (16x12 IR Array w/ 110° FOV) | Inspect
⭐ Fermion: I2S MEMS Microphone | Inspect
⭐ LattePanda Mu | Inspect
⭐ Lite Carrier Board for LattePanda Mu | Inspect
Since this HVAC malfunction detection device performs various interconnected features between different development boards and the web application (dashboard), I needed to compartmentalize consecutive processes and describe functions under the same code file separately to provide comprehensive step-by-step instructions.
Thus, I highly recommend watching the demonstration videos before scrutinizing the tutorial steps to effortlessly grasp device capabilities that might look complicated in the instructions.
As my projects became more intricate due to complex designs and multiple development board integrations, I decided to create concise illustrations to improve my tutorials, visualize the special tasks associated with each development board, and delineate the complicated data transfer procedures between different boards or complementary applications.
Thus, before proceeding with the following steps, I highly recommend inspecting these illustrations to comprehend the device features and structure better.
Note: Since these high-resolution illustrations must be downsized for the tutorial page to load, the text on them loses some legibility. Therefore, I also added the original image files below for further inspection.
Before designing my simplified water-based HVAC system to simulate the required component failures for data collection and in-field model testing, I thoroughly inspected common water-cooled HVAC mechanisms[^4] to understand the inner workings of applying water as a coolant for transferring excess heat in industrial processes.
As I was developing device features, I noticed that I needed to run different data collection procedures and machine learning models simultaneously. Therefore, I decided to create two separate PCB designs to run the required tasks conclusively. Since I wanted my PCB designs to represent the equilibrium of cooling fan failures and thermal (heat) malfunctions, I got inspired by two ancient rival Pokémon — Kyogre and Groudon. Their legendary fights depict the epitome of the conflict between water cooling and exuberant heat :)
Before prototyping my Kyogre-inspired PCB design, I inspected the detailed pin reference of XIAO ESP32C6 and prepared the components that required soldering before programming. Aside from the other components, I employed a soldering station to solder jumper wires to each leg of the micro switch in order to make it compatible with the custom switch connector on the CNC router, which will be explained in the following steps.
Then, I checked the wireless (Wi-Fi) and serial communication quality between XIAO ESP32C6, Arduino Nano, and the web dashboard (application) while transferring and receiving data packets. In the meantime, I also tested the torque capacity of the 28BYJ-48 stepper motor.
I designed my Kyogre-inspired PCB by utilizing Autodesk Fusion 360 and KiCad in tandem. Since I wanted to design a unique 3D-printed encasement to simplify the PCB integration to the special mounts (also 3D-printed) of the aluminum water cooling radiator, I created the PCB outline (edge) on Fusion 360 and then imported the outline file (DXF) to KiCad. In this regard, I was able to design custom 3D parts compatible with the PCB outline precisely.
To replicate this malfunction detection device for water-cooled HVAC systems, you can download the Gerber file below or order the discussed PCB design directly from my ELECROW community page.
By utilizing a TS100 soldering iron, I attached the components listed below.
📌 Component list of the Kyogre PCB:
L_1, L_2 (Headers for XIAO ESP32C6)
A1 (Headers for Arduino Nano)
Mic1 (Fermion: I2S MEMS Microphone)
SSD1306 (Headers for SSD1306 OLED Display)
L1 (Headers for Bi-Directional Logic Level Converter)
SW1 (Micro Switch (JL024-2-026))
ULN2003 (Headers for 28BYJ-48 Stepper Motor)
R1 (20K Resistor)
R2 (220Ω Resistor)
C1, C2, C3, C4, K1 (6x6 Pushbutton)
D1 (5 mm Common Anode RGB LED)
J2 (Headers for Additional Stepper Motor Power Supply)
J1 (Power Jack)
Since some components were tricky to solder due to the unique structure of the Kyogre PCB, I utilized the soldering station to hold the problematic parts.
After soldering all of the components, I tested whether the Kyogre PCB operated as expected or was susceptible to electrical issues.
Before prototyping my Groudon-inspired PCB design, I inspected the detailed pin reference of Particle Photon 2 and prepared the components that required soldering before programming.
Then, I checked the wireless (Wi-Fi) and cloud communication quality between Photon 2, the Particle Cloud, and the web dashboard (application) while transferring and receiving data packets.
I designed my Groudon-inspired PCB by utilizing Autodesk Fusion 360 and KiCad in tandem. Since I wanted to design a unique 3D-printed encasement to simplify the PCB integration to the custom CNC router (also 3D-printed) moving the thermal camera container head, I created the PCB outline (edge) on Fusion 360 and then imported the outline file (DXF) to KiCad. In this regard, I was able to design custom 3D parts compatible with the PCB outline precisely.
To replicate this malfunction detection device for water-cooled HVAC systems, you can download the Gerber file below or order the discussed PCB design directly from my ELECROW community page.
By utilizing a TS100 soldering iron, I attached the components listed below.
📌 Component list of the Groudon PCB:
Photon2 (Headers for Particle Photon 2)
MLX90641 (Headers for MLX90641 Thermal Imaging Camera)
ST7735 (Headers for ST7735 1.8" TFT Display)
U1 (COM-09032 Analog Joystick)
K1 (6x6 Pushbutton)
D1 (5 mm Common Anode RGB LED)
J1 (Power Jack)
Since some components were tricky to solder due to the unique structure of the Groudon PCB, I utilized the soldering station to hold the problematic parts.
After soldering all of the components, I tested whether the Groudon PCB operated as expected or was susceptible to electrical issues.
Since I focused on building a versatile and accessible AI-driven device that identifies the faulty cooling components via anomalous sound detection and diagnoses ensuing thermal cooling malfunctions via visual anomaly detection based on thermal images, I decided to design complementary 3D-printable parts that improve the robustness, compatibility, and capabilities of the device considering harsh operating conditions of industrial plants.
First, I wanted to fix the large aluminum radiator position and integrate the Kyogre PCB as close as possible to the radiator. Thus, I designed these parts:
the main body of the right radiator mount,
the main body of the left radiator mount,
two tilted snap-fit joints perfectly sized for the radiator,
four special legs (back and front) supporting the radiator mounts,
the unique PCB encasement derived from the Kyogre PCB outline,
the PCB encasement connector providing a buckle-shaped joint interlocking with the right radiator mount.
Furthermore, I decided to emboss the Seeed logo on the main body of the left radiator mount to highlight the qualifications of this segment of the AI-powered HVAC malfunction detection device.
I utilized Autodesk Fusion 360 to model all of the mentioned 3D-printable parts and test their clearances to print flawless joints. For further examination, you can download their STL files below.
After designing 3D models and exporting them as STL files, I sliced the exported models in PrusaSlicer, which provides lots of groundbreaking features such as paint-on supports and height range modifiers.
Since I wanted to apply a unique industrial theme representing vivid industrial processes, I utilized this PLA filament:
ePLA-Matte Tangerine
Finally, I printed all of the mentioned models with my Anycubic Kobra 2 3D Printer.
After printing all 3D models related to the aluminum radiator, I started to combine the radiator mount parts via M3 screws through the assembly-ready screw holes.
Then, I fastened the unique Kyogre PCB encasement to the complementary PCB connector via M3 screws. Since the PCB connector is compatible with the right radiator mount via its buckle-shaped snap-fit joint, I was able to interlock the PCB connector with the right mount body effortlessly.
Although I applied hot glue between parts while affixing them via M3 screws, it was still not enough to build a production-ready device, especially considering the harsh operating conditions of industrial HVAC systems. Thus, I employed the well-known heat-set insert technique, borrowed from injection molding, to make the connections sturdier. In this technique, a heat press is generally utilized to embed threaded brass inserts into 3D-printed parts to connect them firmly. In my version, I simply used a soldering iron to embed M3 screws directly into the assembly-ready holes instead of threaded inserts to fasten the parts together.
As discussed earlier, I employed the soldering iron to embed M3 screws directly into the assembly-ready holes to affix parts tightly.
After combining all the parts, I placed the aluminum radiator on the radiator mounts via their bracket-shaped snap-fit joints in order to test the strength of the mounts while carrying the radiator in a tilted position.
After modeling the 3D parts related to the aluminum radiator, I focused on designing a custom CNC router to move the thermal imaging camera to collect thermal scan (imaging) buffers from the predefined locations on the aluminum cooling blocks to produce an accurate thermal image. Also, I wanted to integrate the Groudon PCB as close as possible to the CNC router since the MLX90641 thermal imaging camera must be connected to Photon 2. Thus, I designed these parts:
two chamfered CNC rods,
the micro switch connector,
two special pins for attaching GT2 20T pulleys,
the left CNC stand providing slots for the CNC rods, the 28BYJ-48 stepper motor, the ULN2003 driver board, and the micro switch connector,
the right CNC stand providing slots for the CNC rods and the GT2 20T pulley pins,
the thermal camera container head providing holes to pass CNC rods and slots for the MLX90641 thermal imaging camera, GT2 timing belt, and aluminum gear clamps,
the unique PCB encasement derived from the Groudon PCB outline,
the PCB encasement connector providing a buckle-shaped joint interlocking with the right CNC stand while preventing any contact with the embedded GT2 20T pulley pins.
Furthermore, I decided to emboss the Elecrow logo and the Edge Impulse logo on the left and right CNC stands respectively to highlight the qualifications of this segment of the AI-powered HVAC malfunction detection device.
I utilized Autodesk Fusion 360 to model all of the mentioned 3D-printable parts and test their clearances to print flawless joints. For further examination, you can download their STL files below.
After designing 3D models and exporting them as STL files, I sliced the exported models in PrusaSlicer, which provides lots of groundbreaking features such as paint-on supports and height range modifiers.
Since I wanted to apply a unique industrial theme representing vivid industrial processes, I utilized this PLA filament contrasting with the previous filament color:
ePLA-Matte Morandi Purple
Finally, I printed all of the mentioned models with my Anycubic Kobra 2 3D Printer.
After printing all 3D models related to the custom CNC router, I started to combine the CNC parts via M3 screws through the assembly-ready screw holes and the provided slots for the associated parts.
Then, I fastened the unique Groudon PCB encasement to the complementary PCB connector via M3 screws. Since the PCB connector is compatible with the right CNC stand via its buckle-shaped snap-fit joint and avoids any contact with the GT2 20T pulley pins, I was able to interlock the PCB connector with the right CNC stand effortlessly.
Although I applied hot glue between parts while affixing them via M3 screws, it was still not enough to build a production-ready device, especially for a constantly moving CNC router. Thus, as before, I employed the heat-set insert technique: instead of threaded brass inserts installed with a heat press, I simply used a soldering iron to embed M3 screws directly into the assembly-ready holes to fasten the parts together.
As discussed earlier, I employed the soldering iron to embed M3 screws directly into the assembly-ready holes to affix parts tightly.
For the parts with provided slots, I utilized the hot glue gun to reinforce the connections.
Before finalizing all slot connections via the hot glue gun, I started to work on building the positioning mechanism of the CNC router by integrating these mechanical components into their corresponding slots:
a 28BYJ-48 stepper motor,
a ULN2003 driver board,
a GT2 60T pulley attached to the stepper motor,
two GT2 20T pulleys attached to the special pulley pins,
GT2 6 mm timing belt,
two GT2 aluminum gear clamps.
After affixing the timing belt via the gear clamps, I utilized two M3 screws to adjust the tightness of the timing belt.
After modeling the 3D parts related to the custom CNC router, I realized that my overall design was still lacking some of the features I wanted to implement to build an industrial-level HVAC malfunction detection device, such as an impervious custom reservoir for the simplified water cooling system. Thus, I designed these additional parts:
an aluminum cooling block holder allowing plastic tubing adjustment,
an impermeable water reservoir compatible with the water cooling pump,
a removable top cover for the reservoir with built-in plastic tubing fittings — IN and OUT,
a custom case and a removable top cover for LattePanda Mu with the Lite Carrier board.
Furthermore, I decided to emboss the DFRobot logo and the project name on the top cover of the LattePanda Mu case to emphasize the qualifications of this segment of the AI-powered HVAC malfunction detection device.
I utilized Autodesk Fusion 360 to model all of the mentioned 3D-printable parts and test their clearances to print flawless joints. For further examination, you can download their STL files below.
After designing 3D models and exporting them as STL files, I sliced the exported models in PrusaSlicer, which provides lots of groundbreaking features such as paint-on supports and height range modifiers.
Since I wanted to print pliable parts that withstand water pressure and enclose the Lite Carrier board snugly, I utilized this TPU (flexible) filament:
eTPU-95A Color Change by Temp
Thanks to this TPU filament's temperature-based color-changing ability, I was able to observe the current water temperature effortlessly while simulating thermal cooling malfunctions.
Finally, I printed all of the mentioned models with my Anycubic Kobra 2 3D Printer.
After printing all 3D models related to the additional features, I started to combine the components with their associated parts.
First, I installed the dedicated heatsink (with thermal paste applied) on LattePanda Mu and attached LattePanda Mu to the Lite Carrier board via the built-in connector (slot).
Since the Lite Carrier board does not support Wi-Fi connection out of the box, I connected an AC8265 wireless NIC module (WLAN expansion card) via the built-in M.2 E Key (2230).
Since the water reservoir does not need assembly, I simply placed its removable top cover. Then, I fastened the aluminum cooling blocks to their holders via the hot glue gun. Since the LattePanda Mu case is printed with a flexible filament, I was able to place the Lite Carrier board into the case effortlessly.
As discussed earlier, I needed to build a simplified water-based HVAC system to construct data sets fulfilling the purpose of multi-model HVAC malfunction diagnosis and to conduct in-field model testing. Since I got heavily inspired by PC (computer) water cooling systems, I built my simplified system by utilizing these water cooling components, reminiscent of a closed-loop PC water cooling design:
an aluminum water cooling radiator,
two aluminum water cooling blocks (40 x 80 mm),
a water cooling pump (4.8 W - 240 L/H),
10 mm plastic tubing (hose),
three 120 mm case fans (RGB) compatible with the radiator.
As mentioned, I decided to model a 3D-printable water reservoir, including a removable top cover with built-in plastic tubing fittings — IN and OUT.
After assembling all of the 3D-printed parts, I started to build the simplified water-based HVAC system.
Water Pump OUT ➜ Radiator IN ➜ Radiator OUT ➜ First Aluminum Block IN ➜ First Aluminum Block OUT ➜ Second Aluminum Block IN ➜ Second Aluminum Block OUT ➜ Custom Water Reservoir IN
After completing the simplified closed-loop water cooling system, I started to work on combining PCBs, 3D parts, and the remaining components.
After concluding all of the mentioned assembly stages, I started to conduct experiments to simulate and detect HVAC system cooling malfunctions.
Since I wanted to inform the user of the latest diagnosed cooling malfunctions via SMS after running the Audio MFE and visual anomaly detection models consecutively, I decided to utilize Twilio's SMS API. In this regard, I was also able to transfer the prediction date and the modified resulting image name for further inspection through the web dashboard (application).
Twilio provides a trial text messaging service for transferring an SMS from a virtual phone number to a verified phone number internationally. Also, Twilio officially supports helper libraries for different programming languages, including PHP, covering its suite of APIs.
I noticed that creating free subsidiary accounts (projects) more than once may lead to the permanent suspension of a Twilio user account. So, I recommend using the default trial account or a previously created account if you have not subscribed to a paid plan.
Since Twilio provides a free 10DLC virtual phone number for each trial account, Twilio allows the user to utilize the text messaging service immediately after activating the given virtual phone number.
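Although the web dashboard sends notifications through Twilio's official PHP helper library, the underlying call is easy to sketch. The snippet below is a hedged Python equivalent; the account credentials and phone numbers are placeholders:

```python
# Sketch of the Twilio SMS call (the dashboard itself uses the PHP helper library).
# The account credentials and phone numbers below are placeholders.
from twilio.rest import Client

client = Client("<account_sid>", "<auth_token>")
message = client.messages.create(
    body="HVAC malfunction detected! Please inspect the latest thermal image.",
    from_="+10000000000",  # Twilio virtual (trial) phone number
    to="+20000000000",     # verified phone number
)
print(message.sid)
```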
Before starting to develop the web dashboard (application), I needed to configure the required software and Python modules on LattePanda Mu to be able to host the web dashboard, produce thermal images for data collection, and run the FOMO-AD visual anomaly detection model.
Since the web dashboard heavily relies on Python modules, especially for running the FOMO-AD model via the Edge Impulse Linux Python SDK, I set up Ubuntu as the operating system for LattePanda Mu. As I was working on this device, Ubuntu 22.04 was officially supported by LattePanda Mu. You can inspect the prioritized operating system versions here.
Conveniently, the XAMPP application provides an official Linux installer, so creating a local server with a MariaDB database to host the web dashboard (application) on LattePanda Mu is straightforward.
sudo chmod 755 /home/kutluhan/Downloads/xampp-linux-x64-8.2.12-0-installer.run
sudo /home/kutluhan/Downloads/xampp-linux-x64-8.2.12-0-installer.run
sudo /opt/lampp/manager-linux-x64.run
After installing and setting up the XAMPP application (lampp) on LattePanda Mu, I needed to configure some settings to make the web dashboard (application) access the terminal and execute Python scripts.
sudo chmod -R 777 /opt/lampp/htdocs/HVAC_malfunction_diagnosis_dashboard
However, even after changing the permissions, the web application cannot access the terminal and utilize the sudo command required to execute necessary Python scripts with the root user (super-user) privileges.
Although assigning super-user privileges to different users is a security risk, I decided to give the web application the ability to access the terminal with root user privileges. In this case, it was applicable since the XAMPP application is only operating as a local development environment.
sudo visudo
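Assuming XAMPP's default configuration, in which the bundled web server runs as the daemon user, the sudoers entry I added looks like the line below. Grant privileges like this only in a local development environment:

```
daemon ALL=(ALL:ALL) NOPASSWD: ALL
```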
After configuring the required permissions and privileges for the web application, I needed to install the necessary Python modules.
sudo apt-get install python3-opencv
sudo pip3 install edge_impulse_linux
sudo apt-get install python3-pyaudio
As discussed earlier, I decided to develop a versatile web dashboard (application) to improve the user experience and run essential device features, including but not limited to executing Python scripts.
Since the web application features interconnect with data collection and model running procedures executed by different development boards, please refer to the web application code files or the following steps focusing on the device qualifications to review all of the web application capabilities thoroughly.
As shown below, the web application consists of seven folders and nine code files in various programming languages:
/assets
class.php
dashboard_updates.php
index.css
index.js
Particle_cloud_connection.php
/generate_thermal_img
/img_detection
/img_sample
generate_thermal_image_and_run_model.py
/model
/sample_audio_files
/files
convert_raw_to_wav.py
save_audio_sample.php
index.php
📁 class.php
To bundle all functions under a specific structure, I created a class named dashboard. Please refer to the following steps to inspect all interconnected device features.
⭐ Define the required configurations to communicate with Photon 2 via the Particle Device Cloud API.
⭐ In the init function:
⭐ Define the Twilio account credentials and required settings.
⭐ In the append_log_update function:
⭐ Insert a new system log update regarding data collection or model inference results into the system_log MariaDB database table.
⭐ In the optain_modify_log_updates function:
⭐ Fetch all system log updates registered on the system_log database table.
⭐ According to the given log category, modify the obtained information to generate HTML elements for each system log update.
⭐ While generating HTML elements for the retrieved log updates, append each HTML element to an array so as to create a thorough index.
⭐ Finally, return the produced HTML element index (list).
⭐ If there is no registered system log update in the database table, return the default HTML element index.
⭐ In the particle_register_parameter function:
⭐ Define the authorization configurations and cloud function arguments (POST data parameters) required by the Particle Cloud API.
⭐ By making a cURL call (POST request), employ the Particle Cloud API to make Photon 2 collect a thermal scan (imaging) buffer and register the collected buffer to the Particle Cloud.
⭐ In the particle_obtain_parameter function:
⭐ By making a cURL call (GET request), employ the Particle Cloud API to obtain information regarding the passed Cloud variable registered by Photon 2.
⭐ If the Cloud response is successful, decode the received JSON data packet to fetch the given Cloud variable value. Then, return the obtained value.
⭐ In the particle_generate_thermal_image_from_buffers function:
⭐ Obtain all thermal scan (imaging) buffers registered by Photon 2 individually from the Particle Cloud.
⭐ Then, generate a precise thermal image from the fetched buffers by executing a Python script — generate_thermal_image_and_run_model.py.
⭐ According to the passed process type, save the produced image as a sample directly or run an inference with the Edge Impulse FOMO-AD model via the same Python script.
⭐ Finally, return the response transferred by the executed Python script.
Since the web application executes the given Python script via the shell_exec function, it is not possible to observe debugging errors as in the terminal. Thus, I appended 2>&1 to the command line in the shell_exec function to display debugging errors directly in the browser. In this regard, I was able to develop the web application much faster.
⭐ In the Twilio_send_SMS function:
⭐ Via the Twilio SMS API, send an SMS from the Twilio virtual phone number to the registered (user) phone number to transfer the given text message.
⭐ Define the required MariaDB database configurations for LattePanda Mu.
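To clarify the two cURL calls described above, here is a hedged Python sketch of the same Particle Cloud API requests. The device ID, access token, and cloud variable name are placeholders; collect_thermal_buffers matches the cloud function discussed in the following steps:

```python
# Sketch of the Particle Cloud API requests made by particle_register_parameter (POST)
# and particle_obtain_parameter (GET), written in Python only for illustration.
import requests

BASE = "https://api.particle.io/v1/devices"
DEVICE_ID = "<photon2_device_id>"                          # placeholder
HEADERS = {"Authorization": "Bearer <user_access_token>"}  # placeholder token

# POST request: call the cloud function registered by Photon 2 to collect a buffer.
r = requests.post(f"{BASE}/{DEVICE_ID}/collect_thermal_buffers",
                  headers=HEADERS, data={"arg": "1"})
print(r.json().get("return_value"))

# GET request: read a cloud variable registered by Photon 2 (a thermal buffer string).
r = requests.get(f"{BASE}/{DEVICE_ID}/thermal_buffer_1", headers=HEADERS)
print(r.json().get("result"))
```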
📁 Particle_cloud_connection.php
⭐ Include the class.php file and define the dashboard object of the dashboard class.
⭐ If requested via HTTP GET request, communicate with the Particle Cloud to obtain the value of the passed Cloud variable (individually) registered by Photon 2 and return the fetched value.
⭐ If requested via HTTP GET request, communicate with the Particle Cloud in order to make Photon 2 collect a thermal imaging buffer and register the collected buffer to the passed Cloud variable.
⭐ If requested via HTTP GET request:
⭐ Communicate with the Particle Cloud to obtain all thermal imaging buffers registered by Photon 2.
⭐ Generate a thermal image from the obtained buffers by executing a Python script — generate_thermal_image_and_run_model.py.
⭐ According to the passed process type (sample or detection), save the generated image as a sample or run an inference with the Edge Impulse FOMO-AD (visual anomaly detection) model via the same Python script.
⭐ Then, decode the response generated by the Python script to obtain the image tag (default sample or detected label) and the creation date.
⭐ After producing the thermal image and conducting the given process type successfully, update the system log on the MariaDB database accordingly.
⭐ Finally, depending on the process type, send an SMS via Twilio to inform the user of the latest system log update regarding cooling status.
📁 dashboard_updates.php
⭐ Include the class.php file and define the dashboard object of the dashboard class.
⭐ If requested via HTTP GET request:
⭐ Retrieve all of the system log updates on the MariaDB database table — system_log.
⭐ According to the given log category, modify the obtained information to generate HTML elements for each system log update.
⭐ Then, create a JSON object from the produced HTML element index (list).
⭐ Finally, return the recently generated JSON object.
📁 save_audio_sample.php
⭐ Include the class.php file and define the dashboard object of the dashboard class.
⭐ Define the text file name for the received raw audio buffer (I2S).
⭐ If XIAO ESP32C6 transfers the selected audio class name via a GET (URL) parameter, modify the text file name accordingly.
⭐ If XIAO ESP32C6 transfers the collected raw audio buffer (I2S) via an HTTP POST request:
⭐ Save the received audio buffer to the defined text (TXT) file.
⭐ Then, convert the recently saved raw audio buffer (TXT file) to a WAV audio file by executing a Python script — convert_raw_to_wav.py.
⭐ When executing the Python script, pass the required audio conversion parameters for the Fermion I2S MEMS microphone as Python Arguments.
⭐ After generating the WAV audio file from the raw audio buffer, remove the converted text file from the server.
⭐ After completing the audio conversion process successfully, update the system log on the MariaDB database accordingly.
As noted earlier, since the web application executes the given Python script via the shell_exec function, I appended 2>&1 to the command line to display debugging errors directly in the browser.
📁 index.js
⭐ Utilizing the setInterval function, every 5 seconds, make an HTTP GET request to the dashboard_updates.php file to:
⭐ Retrieve the HTML element index (list) as a JSON object generated from the system log updates registered on the MariaDB database table.
⭐ Decode the obtained JSON object.
⭐ Pass the fetched HTML elements (sections) to the web dashboard home (index) page automatically.
⭐ According to the given display category option, show the associated elements only on the index page.
⭐ According to the clicked horizontal menu button, change the display category option and the clicked button's appearance by toggling classes.
📁 You can inspect index.php and index.css files below, which are for designing the web dashboard home (index) page.
As explained earlier, I needed to convert the raw audio buffers transferred by XIAO ESP32C6 to WAV audio files in order to save compatible audio samples for Edge Impulse. Therefore, I programmed a simple Python script to perform the audio conversion process.
Since Python scripts can obtain parameters as Python Arguments from the terminal (shell) directly, the web dashboard (application) passes the required audio conversion variables effortlessly.
📁 convert_raw_to_wav.py
⭐ Include the required modules.
⭐ Obtain and decode audio conversion parameters transferred by the web dashboard as Python Arguments.
⭐ Get all text (.txt) files consisting of raw audio buffers (I2S) transferred by XIAO ESP32C6.
⭐ Then, open each text file to convert the stored raw audio buffers to WAV audio files and save the produced WAV audio samples to the files folder.
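Under my assumptions — the I2S buffers arrive as raw signed 16-bit little-endian mono PCM bytes, and the file locations and argument order are illustrative — the conversion boils down to wrapping the raw bytes in a WAV header with Python's built-in wave module:

```python
# Hedged sketch of the raw-to-WAV conversion. Assumes each .txt file holds raw
# signed 16-bit little-endian mono PCM samples from the I2S microphone and the
# sampling rate arrives as a Python Argument.
import glob
import os
import sys
import wave

sample_rate = int(sys.argv[1]) if len(sys.argv) > 1 else 16000

for txt_path in glob.glob("files/*.txt"):
    with open(txt_path, "rb") as f:
        raw = f.read()
    with wave.open(txt_path.replace(".txt", ".wav"), "wb") as wav:
        wav.setnchannels(1)            # mono — the ONLY_RIGHT I2S channel
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(raw)
    os.remove(txt_path)                # remove the converted text file from the server
```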
As discussed earlier, Photon 2 is not suitable for generating thermal images, saving image samples, and running a demanding visual anomaly detection model simultaneously due to memory limitations. Therefore, I utilized the web dashboard to obtain the thermal scan (imaging) buffers registered on the Particle Cloud and programmed a Python script to perform the mentioned processes.
Since Python scripts can obtain parameters as Python Arguments from the terminal (shell) directly, the web dashboard (application) passes the obtained thermal imaging buffers and the given process type effortlessly.
📁 generate_thermal_image_and_run_model.py
To bundle all functions under a specific structure, I created a class named thermal_img. Please refer to the following steps to inspect all interconnected device features.
⭐ Include the required modules.
⭐ In the init function:
⭐ Get the absolute folder path to avoid errors while running this script via the web dashboard (application).
⭐ Define the required configurations to run the Edge Impulse FOMO-AD visual anomaly detection model converted to a Linux (x86_64) application (.eim).
⭐ Define the required variables to generate a thermal image from the given thermal scan (imaging) buffers, including the template (blank) image.
⭐ In the generate_thermal_img function:
⭐ Open and read the template (blank) image (192 x 192) via the built-in OpenCV function — imread.
⭐ Since the MLX90641 thermal imaging camera produces 16x12 IR arrays (buffers), I decided to set the pixel width as six (6) and the pixel height as eight (8) to fill the template image completely with four sequential buffers.
⭐ For each passed thermal imaging buffer ((16x12) x 4):
⭐ Define the coordinates for the first pixel.
⭐ Starting with the first pixel, draw each individual data point with the color indicator on the template image to generate a precise thermal image, estimated by the specific color algorithm based on the temperature ranges defined on Photon 2.
⭐ Note: Indicators are defined in the BGR format.
⭐ After drawing a pixel successfully, update the successive data point coordinates.
⭐ After generating the thermal image from the given buffers, store the modified template frame before saving an image file.
⭐ In the save_thermal_img function:
⭐ Depending on the passed process type (sample or detection), save the stored thermal image frame as a sample to the img_sample folder directly or save the modified model resulting image (after running the FOMO-AD model) to the img_detection folder.
⭐ Print the passed image tag (sample or the detected label) with the creation (or prediction) date as the response to the web dashboard.
⭐ In the run_inference function:
⭐ Print the provided information of the Edge Impulse FOMO-AD visual anomaly detection model.
⭐ Get the latest stored thermal image (frame).
⭐ After obtaining the latest thermal image, resize the retrieved frame if necessary and generate features from the cropped frame depending on the given model characteristics.
⭐ Run an inference.
⭐ Since the Edge Impulse FOMO-AD model categorizes a passed image by individual cells (grids) based on the dichotomy between two predefined classes (anomaly and no anomaly), utilize the mean visual anomaly value to detect overall (high-risk) thermal cooling malfunctions based on the confidence threshold estimated while testing the model accuracy on Edge Impulse.
⭐ If the calculated mean visual anomaly value is higher than the given threshold:
⭐ Obtain the visual anomaly grid produced by the FOMO-AD model, consisting of individual cells with coordinates, assigned labels, and anomaly scores.
⭐ If a cell's assigned label is anomaly and its anomaly score is higher than the given threshold:
⭐ Draw a rectangle on the model resulting image (cropped) with the provided cell coordinates.
⭐ Calculate the cell's anomaly intensity level — Low (L), Moderate (M), High (H) — in relation to the given threshold.
⭐ Then, draw the evaluated anomaly intensity level to the top-left corner of the cell rectangle.
⭐ Save the model resulting image modified with the cell rectangles and their evaluated anomaly intensity levels.
⭐ Finally, stop the running inference.
⭐ Define the thermal_img object of the thermal_img class and pass the path of the FOMO-AD model (Linux (x86_64) application) on the server.
⭐ Obtain and decode thermal scan (imaging) buffers and the process type transferred by the web dashboard as Python Arguments.
⭐ After obtaining the required parameters, generate a precise thermal image from the passed thermal scan (imaging) buffers.
⭐ Depending on the passed process type (sample or detection), run an inference with the Edge Impulse FOMO-AD visual anomaly detection model to diagnose thermal cooling malfunctions or save the produced thermal image directly as a sample.
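Under the stated assumptions — the buffer format, file paths, SDK result keys, and the anomaly threshold below are illustrative, not the exact production code — the script's two core steps can be sketched as follows:

```python
# Hedged sketch: generate the 192x192 thermal image from four 16x12 buffers
# (6x8 pixels per data point) and threshold the FOMO-AD inference result.
import cv2
import numpy as np
from edge_impulse_linux.image import ImageImpulseRunner

PIXEL_W, PIXEL_H = 6, 8                         # 16*6 = 12*8 = 96 → four 96x96 quadrants
ORIGINS = [(0, 0), (96, 0), (0, 96), (96, 96)]  # top-left corner of each buffer

def generate_thermal_img(buffers):
    # Each buffer is assumed to be a list of 16x12 = 192 BGR color indicators.
    img = np.zeros((192, 192, 3), np.uint8)     # stand-in for the blank template image
    for (x0, y0), buf in zip(ORIGINS, buffers):
        for i, color in enumerate(buf):
            x, y = x0 + (i % 16) * PIXEL_W, y0 + (i // 16) * PIXEL_H
            cv2.rectangle(img, (x, y), (x + PIXEL_W, y + PIXEL_H), color, -1)
    return img

def run_inference(img, model_path="model/fomo-ad.eim", threshold=5.0):
    with ImageImpulseRunner(model_path) as runner:
        runner.init()
        features, cropped = runner.get_features_from_image(
            cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        result = runner.classify(features)["result"]
        out = cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR)
        # FOMO-AD reports aggregate anomaly statistics plus a per-cell grid.
        if result.get("visual_anomaly_mean", 0) > threshold:
            for cell in result.get("visual_anomaly_grid", []):
                if cell["value"] > threshold:
                    cv2.rectangle(out, (cell["x"], cell["y"]),
                                  (cell["x"] + cell["width"], cell["y"] + cell["height"]),
                                  (0, 0, 255), 1)  # red rectangle per anomalous cell
            return "anomaly", out
        return "no anomaly", out
```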
Since LattePanda Mu is a budget-friendly compute module providing consistent multitasking performance thanks to its Intel N100 quad-core processor and 8 GB of LPDDR5 memory, I decided to host the web application on LattePanda Mu combined with its Lite Carrier board.
http://localhost/phpmyadmin/
⚠️🔊♨️🖼️ After running the web dashboard for the first time, the home (index) page waits for obtaining the latest system log updates registered on the MariaDB database table.
⚠️🔊♨️🖼️ If there is no registered system log update in the database table, the index page displays the default placeholders to notify the user.
Although XIAO ESP32C6 is a production-ready and compact IoT development board, before proceeding with the following steps, I needed to set up XIAO ESP32C6 on the Arduino IDE, install the required libraries, and configure some default settings.
When I was setting up XIAO ESP32C6 on the Arduino IDE, the current stable release of the Arduino-ESP32 board package (2.0.15) did not support the ESP32-C6 chipset. Therefore, I utilized the latest development release (3.0.0-rc1).
https://espressif.github.io/arduino-esp32/package_esp32_dev_index.json
Adafruit_SSD1306 | Download
Adafruit-GFX-Library | Download
Even though C++ is available for programming Particle development products, the Arduino IDE is not suitable due to the additional requirements for the Particle Device OS. Fortunately, Particle officially supports Visual Studio Code (VSCode) and provides the Particle Workbench, which is an integrated development and debugging environment. Since the Particle Workbench capitalizes on the built-in IntelliSense features of VSCode, it makes programming Photon 2 straightforward and effortless.
After creating a new project successfully on VSCode, I decided to utilize the Particle web-based setup wizard to configure the required settings for the Particle Cloud easily, providing step-by-step instructions.
https://setup.particle.io/
Even though Particle supports Arduino libraries, integrating them into the VSCode Workbench extension is not a simple copy-paste process.
The Particle development environment requires the following file structure to compile a library. The src folder must contain all of the essential library files (.cpp and .h).
MyLibrary/
  examples/
    usage/
      usage.ino
  src/
    MyLibrary.cpp
    MyLibrary.h
  library.properties
  README.md
  LICENSE
Thus, we need to modify the file structure of an existing Arduino library if it is not compatible with that of Particle.
Nevertheless, Particle provides a plethora of production-ready Arduino libraries maintained by the Particle community. Thus, adding officially supported Arduino libraries to the Workbench extension is uncomplicated.
Following the discussed steps, I installed these libraries from the Particle libraries ecosystem:
Adafruit_GFX_RK | Inspect
Adafruit_ST7735_RK | Inspect
After installing the supported libraries, I modified the remaining Arduino libraries required for the components connected to Photon 2:
Seeed_Arduino_MLX9064x | Inspect
You can download the Arduino libraries I modified for the Particle development environment below.
After setting up the libraries, I tested the connection quality between Photon 2 and the Particle Cloud by utilizing the provided cloud transmission methods — Particle.variable() and Particle.function().
After ensuring consistent cloud data transmission, I needed to generate a user access token to make the web application (dashboard) employ the Particle Device Cloud API to communicate with the Particle Cloud.
Despite the fact that the Particle CLI lets the user generate access tokens, you can also create a token using the official web-based token generation tool on the browser.
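For instance, with the Particle CLI installed and authenticated, a new user access token can be generated from the terminal:

```
particle token create
```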
I followed the exact same process to display images on the SSD1306 OLED screen (XIAO ESP32C6) and the ST7735 TFT display (Photon 2).
⭐ In the logo.h file, I defined multi-dimensional arrays to group the assigned logos and their sizes — width and height.
After setting up all development boards on their associated software, I started to work on improving and refining code to perform functions flawlessly. First, I focused on programming XIAO ESP32C6, which manages audio sample collection and data transmission to the web application.
As explained in the previous steps, the device performs lots of interconnected features between different development boards and the web application for data collection and running advanced AI models. Thus, the described code snippets show the different aspects of the same code file. Please refer to the code files or the demonstration videos to inspect all interconnected functions in detail.
📁 HVAC_fault_diagnosis_anomalous_sound.ino
⭐ Include the required libraries.
⭐ Add the icons to be shown on the SSD1306 OLED display, which are saved and grouped in the logo.h file.
⭐ Define the required server configurations for the web application hosted on LattePanda Mu.
⭐ Then, initialize the WiFiClient object.
⭐ Define the Fermion I2S MEMS microphone pin configurations, audio sample bits, and the I2S processor port.
⭐ Configure the SSD1306 screen settings.
⭐ In the i2s_install function, configure the I2S processor port with the passed sampling rate and set the channel format as ONLY_RIGHT.
⭐ In the i2s_setpin function, assign the given I2S microphone pin configurations to the defined I2S port via the built-in I2S driver.
⭐ Wait until XIAO ESP32C6 establishes a successful connection with the given Wi-Fi network.
⭐ According to the pressed control button (A or C), adjust the highlighted menu option number by one — -1 (UP) or +1 (DOWN).
⭐ In the show_interface function:
⭐ According to the passed screen command and menu option number, get the assigned icon information, show the home screen with the highlighted menu option, or display the associated layout after the highlighted menu option is selected.
⭐ Depending on the status of the CNC positioning process (Waiting, Ongoing, Saved, or Image Ready), display the associated buffer operation status indicator on the screen for each positioning point (location).
⭐ Show the associated class icon and name according to the audio class predicted by the Audio MFE model.
⭐ In the microphone_sample function:
⭐ Obtain the information generated by the I2S microphone and save it to the input buffer — sample_audio_buffer.
⭐ If the I2S microphone generates raw audio data successfully, scale the produced raw audio buffer depending on the model requirements. Otherwise, the sound might be too quiet for classification.
⭐ If requested for debugging, display the average (mean) output values on the serial plotter.
⭐ In the make_a_post_request function:
⭐ Connect to the web application with the configured server settings.
⭐ Create the query string by appending the passed URL query (GET) parameters.
⭐ Define the AudioSample boundary parameter to transfer the produced raw audio sample to the web application as a plain text file.
⭐ Estimate the total message (content) length.
⭐ Initiate an HTTP POST request with the created query string as additional URL parameters to the web application.
⭐ While making the POST request, according to the defined buffer multiplier, collect and write (transfer) raw audio buffers consecutively to prevent memory allocation issues.
⭐ Then, conclude data (buffer) writing and the POST request.
⭐ Wait until fully transferring the raw audio sample produced from individual buffers.
⭐ After highlighting a menu option on the home screen, if the control button B is pressed, navigate to the selected option's layout.
⭐ If the first option (Collect Audio) is activated:
⭐ Inform the user of the audio sample collection settings on the SSD1306 screen.
⭐ According to the pressed control button (A or C), select an audio class for the sample.
A ➜ normal
C ➜ defective
⭐ Before producing an audio sample, check the I2S microphone status by running the microphone_sample function once.
⭐ If the I2S microphone generates a raw audio buffer as expected, notify the user on the screen.
⭐ Then, collect raw audio buffers and transfer them simultaneously to the web application until reaching the predefined buffer multiplier number in order to send the produced audio sample without triggering memory allocation errors.
⭐ Notify the user of the web application data transmission success on the screen by showing the associated status icons.
⭐ If the control button D is pressed, redirect the user to the home screen.
⚠️🔊♨️🖼️ If XIAO ESP32C6 establishes a successful connection with the given Wi-Fi network and all connected components operate as expected, the device shows the home screen on the SSD1306 OLED display.
Collect Audio
Faulty Sound
CNC Positioning & Thermal Buffer Collection
⚠️🔊♨️🖼️ The device lets the user adjust the highlighted menu option on the home screen by pressing the control buttons — A (↑) and C (↓).
⚠️🔊♨️🖼️ After changing the highlighted menu option, the device also updates the icon on the home screen with the assigned option icon.
⚠️🔊♨️🖼️ As a menu option is highlighted, if the control button B is pressed, the device navigates to the selected option's layout.
⚠️🔊♨️🖼️ Note: If the user presses the control button D, XIAO ESP32C6 returns to the home screen.
⚠️🔊♨️🖼️ If the user activates the first menu option — Collect Audio:
⚠️🔊♨️🖼️ On the associated layout, the device informs the user of the audio sample collection settings, which allow the user to assign an audio class to the sample by pressing a control button.
A ➜ normal
C ➜ defective
⚠️🔊♨️🖼️ After selecting an audio class, the device checks whether the I2S microphone operates accurately and informs the user regarding the microphone status on the screen before producing an audio sample.
⚠️🔊♨️🖼️ If the I2S microphone works as expected, the device collects raw audio buffers and transfers them simultaneously to the web application until reaching the predefined buffer multiplier number while maintaining an HTTP POST request.
⚠️🔊♨️🖼️ In this regard, the device can produce and send long raw audio samples to the web application without triggering memory allocation issues.
⚠️🔊♨️🖼️ After concluding the POST request, the device notifies the user of the data transmission success on the screen by showing the associated status icons.
⚠️🔊♨️🖼️ As explained in the previous steps, after receiving the produced raw audio sample, the web application saves the sample as a plain text file temporarily.
⚠️🔊♨️🖼️ Then, the web application runs a Python script to convert the raw audio sample to a WAV audio file compatible with Edge Impulse.
⚠️🔊♨️🖼️ After converting the given sample with the passed audio conversion parameters successfully, the web application updates the system log on the MariaDB database accordingly.
⚠️🔊♨️🖼️ Finally, the web application updates its home (index) page automatically to showcase the latest system log entries. In addition to displaying the collection dates and the assigned audio classes, the web application lets the user download audio samples individually on the home page.
After collecting samples of normal and defective sound originating from the HVAC system cooling fans, I managed to construct a valid audio data set stored on the web application.
Since I decided to build a fully 3D-printable custom CNC router to position the MLX90641 thermal imaging camera, I needed to design a separate CNC control mechanism based on Arduino Nano. In this regard, I was able to move the thermal camera container head according to the CNC commands received via serial communication.
After programming XIAO ESP32C6, I focused on improving and refining CNC functions performed by Arduino Nano.
📁 HVAC_thermal_camera_CNC.ino
⭐ Include the required libraries.
⭐ Define the 28BYJ-48 stepper motor configurations and initialize the stepper object.
⭐ Define a software serial port (XIAO) since the default (USB) hardware serial port is occupied for debugging.
⭐ Define all of the required CNC commands and step numbers by creating a struct — _CNC — so as to organize and call them efficiently.
⭐ Initiate the defined software serial port to communicate with XIAO ESP32C6.
⭐ In the CNC_motor_move function:
⭐ Rotate the stepper motor of the CNC router to move the thermal camera container head according to the passed step number and the direction.
CW: Clockwise
CCW: Counter-clockwise
⭐ While turning the stepper motor counter-clockwise, check whether the thermal camera container head triggers the micro switch by colliding with it.
⭐ If so, force the container head to return to the home position. Then, turn the RGB LED to white.
⭐ In the CNC_position_home function, return the thermal camera container head to the home position — 0.
⭐ Obtain the data packet transferred by XIAO ESP32C6 via serial communication.
⭐ Depending on the received CNC coordinate update command, change the thermal camera container head position by rotating the stepper motor by the predefined step number.
⭐ When starting the positioning process, turn the RGB LED to red. After completing the positioning process, turn the RGB LED to yellow.
⭐ Then, send the coordinate update confirmation message — CNC_OK — to XIAO ESP32C6 via serial communication.
⭐ After sending the confirmation message, turn the RGB LED to green.
⭐ After going through four coordinate updates, if XIAO ESP32C6 transmits the zeroing command, return the thermal camera container head to the starting point (zeroing) by calculating the total number of steps traveled.
⭐ When starting the zeroing process, turn the RGB LED to red. After completing the zeroing process, turn the RGB LED to yellow.
⭐ Then, send the zeroing confirmation message — CNC_OK — to XIAO ESP32C6 via serial communication.
⭐ After sending the zeroing confirmation message, turn the RGB LED to purple.
⭐ Finally, clear the received data packet.
⭐ If the home button is pressed, initiate the container head homing sequence, which returns the container head to the home position (0) by utilizing the micro switch.
After completing the CNC router programming, controlled by Arduino Nano, I focused on improving the remaining XIAO ESP32C6 features, including transferring commands to Arduino Nano and communicating with the web application regarding thermal imaging buffer collection.
As noted earlier, the described code snippets show different aspects of the same code file. Please refer to the code files or the demonstration videos to inspect all interconnected functions in detail.
📁 HVAC_fault_diagnosis_anomalous_sound.ino
⭐ Define all of the required CNC commands and variables by creating a struct — _CNC — so as to organize and call them efficiently.
⭐ Initiate the hardware serial port (Serial1) to communicate with Arduino Nano.
⭐ In the make_a_get_request function:
⭐ Connect to the web application with the configured server settings.
⭐ Create the query string by appending the passed URL query (GET) parameters.
⭐ Make an HTTP GET request with the given URL parameters to the web application.
⭐ Wait until successfully completing the request process.
⭐ In the nano_update_response function:
⭐ Wait until Arduino Nano transfers a data packet via serial communication.
⭐ Then, return the obtained data packet.
⭐ In the thermal_buffer_collection_via_CNC function:
⭐ Initiate the four-step CNC positioning sequence consisting of different CNC commands — from 1 to 4.
⭐ For each CNC positioning command:
⭐ Transfer the given command to Arduino Nano via serial communication.
⭐ Update the buffer operation status indicator to Ongoing on the screen with the associated status icon.
⭐ Wait until Arduino Nano replies with the coordinate update confirmation message (CNC_OK) via serial communication after moving the thermal camera container head to the predefined position.
⭐ After obtaining the confirmation message, update the buffer status indicator to Completed on the screen with the associated status icon.
⭐ After positioning the container head according to the passed CNC command, make an HTTP GET request to the web application (dashboard) in order to make Photon 2 collect and register the associated thermal imaging buffer through the Particle Cloud API.
⭐ If the GET request is successful, update the buffer status indicator to Saved on the screen with the associated status icon.
⭐ Then, increase the command number to resume the positioning sequence.
⭐ After concluding the four-step CNC positioning sequence successfully, return the thermal camera container head to the starting point (zeroing) by transmitting the zeroing command to Arduino Nano via serial communication.
⭐ Wait until Arduino Nano replies with the zeroing confirmation message (CNC_OK) via serial communication after moving the thermal camera container head to the starting point.
⭐ After obtaining the zeroing confirmation message, change all buffer status indicators on the screen to Image Ready.
⭐ After finalizing the CNC positioning sequence and the zeroing procedure, make a successive HTTP GET request to the web application to initiate the thermal image conversion process with the thermal imaging buffers registered on the Particle Cloud.
⭐ If the GET request is successful, halt all processes and redirect the user to the home screen.
⭐ If the third option (CNC Positioning & Thermal Buffer Collection) is activated:
⭐ Clear the previously assigned buffer status indicators.
⭐ Initiate the four-step CNC positioning sequence so as to move the thermal camera container head to the predefined locations for consecutive thermal scan (imaging) buffer collection through the Particle Cloud API.
⭐ Notify the user of each buffer status indicator update by showing their associated status icons on the SSD1306 screen — Waiting, Ongoing, Saved, and Image Ready.
⭐ If the control button D is pressed, redirect the user to the home screen.
After working on the XIAO ESP32C6 data transmission procedure with the web application and the custom CNC router positioning sequence, I focused on developing and improving Particle Photon 2 functions related to thermal imaging buffer collection and registration.
As discussed earlier, I set up the Particle Workbench on Visual Studio Code (VSCode) to be able to utilize the Particle Device OS to program Photon 2. You can inspect the integrated Particle Cloud transmission methods of the Device OS and their limitations from here.
📁 HVAC_fault_diagnosis_thermal_image.cpp
⭐ Include Particle Device OS APIs.
⭐ Include the required libraries.
⭐ Add the icons to be shown on the ST7735 TFT display, which are saved and grouped in the logo.h file.
⭐ Via the built-in Device OS functions, connect to the Particle Cloud automatically.
⭐ Then, enable threading to run the given program (application) and the built-in cloud transmission system (network management) concurrently.
⭐ Define the Particle Cloud variable names and registration status indicators by creating a struct — _thermal — so as to organize and call them efficiently.
⭐ Define the MLX90641 thermal imaging camera configurations, including the 7-bit unshifted device address and the open air shift value.
⭐ To create a specific color algorithm for converting IR array data items to color-based indicators to produce a thermal imaging buffer, define temperature threshold ranges. Then, define the required information to generate a preview (snapshot) thermal image from the produced buffers.
⭐ Configure the ST7735 TFT screen settings.
⭐ Define the required variables for the home screen and the option layouts by creating a struct — _menu — so as to organize and call them efficiently.
⭐ To prevent errors due to threading that manages simultaneous cloud transmission, declare custom application functions before the setup function.
⭐ Assign new variables to the Particle Cloud by utilizing the built-in Particle.variable method.
⭐ Assign new functions to the Particle Cloud by utilizing the built-in Particle.function method.
⭐ Initialize the ST7735 screen with the required configurations.
⭐ Initiate the I2C communication and set the clock speed to 2 MHz to generate accurate thermal scan (imaging) buffers via the MLX90641 thermal imaging camera.
⭐ Check the I2C connection success with the MLX90641 thermal imaging camera and the camera parameter extraction status.
⭐ If the thermal imaging camera operates as expected and the parameter extraction is successful, release the eeMLX90641 array and set the refresh rate to 16 Hz.
⭐ According to the analog joystick movements (UP or DOWN), adjust the highlighted menu option number and the screen update status.
⭐ In the show_interface function:
⭐ According to the passed screen command and the menu option number, show the default home screen or the selected option layout.
⭐ Prevent home screen flickering by drawing it only once when requested in the loop.
⭐ If the screen command is scan:
⭐ Show the associated interface icon on the layout.
⭐ Then, display the registration status indicators for each thermal imaging buffer with the assigned icons.
⭐ If the screen command is inspect:
⭐ Show the associated interface icon on the layout.
⭐ If all thermal scan (imaging) buffers are collected and registered successfully:
⭐ Obtain individual data points of each produced thermal buffer by converting them from strings to char arrays.
⭐ For each passed thermal imaging buffer ((16x12) x 4):
⭐ Define the coordinates for the first pixel.
⭐ Starting with the first pixel, draw each individual data point with the color indicator to display an accurate preview thermal image on the screen, estimated by the specific color algorithm based on the defined temperature threshold ranges.
⭐ After drawing a pixel successfully, update the successive data point coordinates.
⭐ If the registered thermal buffers do not meet the requirements, show the blank preview image to notify the user.
⭐ In the get_and_display_data_from_MLX90641 function:
⭐ Get the required variables generated by the MLX90641 thermal imaging camera to calculate the IR array (16x12).
⭐ Estimate the temperature reflection loss based on the sensor's ambient temperature.
⭐ Then, compute and store the IR array.
⭐ Apply the specific algorithm based on the defined temperature ranges to convert each data point of the given IR array to color-based indicators.
⭐ Then, produce the thermal scan (imaging) buffer by appending each evaluated color indicator to the given string variable.
⭐ Finally, return the produced thermal imaging buffer — string.
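Assuming the same Melexis driver calls, the buffer-producing routine might look like the following; the emissivity value and temperature thresholds are placeholders, not the project's exact ranges:

```cpp
String get_and_display_data_from_MLX90641() {
  uint16_t frame[242];                          // raw MLX90641 frame data
  float MLX90641To[192];                        // computed 16x12 IR array
  MLX90641_GetFrameData(MLX90641_address, frame);
  float Ta = MLX90641_GetTa(frame, &MLX90641);  // sensor's ambient temperature
  float tr = Ta - 8.0;                          // reflected temperature compensation
  MLX90641_CalculateTo(frame, &MLX90641, 0.95, tr, MLX90641To);  // assumed emissivity

  String thermal_buff = "";
  for (int i = 0; i < 192; i++) {
    float t = MLX90641To[i];                    // placeholder threshold ranges below
    if (t < 20)      thermal_buff += 'w';
    else if (t < 23) thermal_buff += 'c';
    else if (t < 26) thermal_buff += 'b';
    else if (t < 29) thermal_buff += 'y';
    else if (t < 32) thermal_buff += 'o';
    else             thermal_buff += 'r';
  }
  return thermal_buff;                          // the produced thermal imaging buffer
}
```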
⭐ After changing the menu option number, highlight the selected option and show the associated icon on the home screen.
⭐ After highlighting a menu option on the home screen, if the joystick button is pressed, navigate to the selected option's layout.
⭐ If the first option (Scan) is activated:
⭐ If the control button OK is pressed, produce a thermal imaging buffer and assign the generated buffer to the predefined string variable linked to the Particle Cloud variable according to the current buffer number — from 0 to 3. Also, update the associated buffer registration status indicator as registered.
⭐ Then, increment the buffer number (from 0 to 3).
⭐ After registering thermal buffers, show the buffer status indicators with the assigned icons on the screen to inform the user of the ongoing procedure.
⭐ To avoid flickering, only update the latest changed buffer status indicator.
⭐ If the analog joystick moves to the left, redirect the user to the default home screen.
⭐ If the second option (Inspect) is activated:
⭐ Display the preview thermal image generated from the registered thermal imaging buffers on the layout.
⭐ If the registered thermal buffers do not meet the requirements, show the blank preview image.
⭐ If the control button OK is pressed, clear all registered thermal scan buffers and set their status indicators as blank. Then, remove the latest preview thermal image by displaying the blank one.
⭐ If the analog joystick moves to the left, redirect the user to the default home screen.
⭐ In the collect_thermal_buffers function:
⭐ According to the passed buffer number (from 1 to 4), produce a thermal imaging buffer and assign the generated buffer to the predefined string variable linked to the Particle Cloud variable.
⭐ Also, update the associated buffer status indicator as registered and blink the RGB LED as green to notify the user of the buffer registration success.
⭐ If requested, clear all registered thermal scan buffers and set their status indicators as blank.
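Putting the pieces together, a simplified version of this cloud-callable function could look like the sketch below; the buffer array and LED helper are assumptions:

```cpp
String thermal_buff[4];   // simplified stand-in for the four linked cloud variables

int collect_thermal_buffers(String command) {
  if (command == "clear") {
    for (int i = 0; i < 4; i++) thermal_buff[i] = "";       // reset registered buffers
    return 1;
  }
  int num = command.toInt();                    // passed buffer number: 1 to 4
  if (num < 1 || num > 4) return -1;            // reject malformed commands
  thermal_buff[num - 1] = get_and_display_data_from_MLX90641();  // produce and register
  blink_rgb_green();                            // assumed helper: notify registration success
  return 1;
}
```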
⚠️🔊♨️🖼️ If the user presses the home button, Arduino Nano homes the thermal camera container head by employing the micro switch on the CNC router.
⚠️🔊♨️🖼️ When the homing sequence starts, Arduino Nano turns the RGB LED to blue. After returning the container head to the home position (0), it turns the RGB LED to white.
⚠️🔊♨️🖼️ If Particle Photon 2 establishes a successful connection with the Particle Cloud and all connected components operate as expected, the device shows the home screen on the ST7735 TFT display.
Scan
Inspect
⚠️🔊♨️🖼️ The device lets the user adjust the highlighted menu option on the home screen by moving the analog joystick — UP (↑) and DOWN (↓).
⚠️🔊♨️🖼️ After changing the highlighted menu option, the device also updates the icon on the home screen with the assigned option icon.
⚠️🔊♨️🖼️ As a menu option is highlighted, if the joystick button is pressed, the device navigates to the selected option's layout.
⚠️🔊♨️🖼️ Note: If the user moves the joystick to the left, Photon 2 returns to the default home screen.
⚠️🔊♨️🖼️ If the user activates the first menu option — Scan:
⚠️🔊♨️🖼️ The device shows the current buffer registration status indicators with the assigned icons on the screen to inform the user of the ongoing procedure. Then, the device turns the RGB LED to cyan.
⚠️🔊♨️🖼️ The device lets the user manually produce thermal imaging buffers and register the generated buffers to the linked Particle Cloud variables by pressing the control button OK. For each press, the device registers the produced buffer to the Particle Cloud incrementally — from 1 to 4.
⚠️🔊♨️🖼️ After registering a thermal imaging buffer successfully, the device updates the associated buffer status indicator as registered.
⚠️🔊♨️🖼️ If the user activates the third menu option provided by XIAO ESP32C6 — CNC Positioning & Thermal Buffer Collection:
⚠️🔊♨️🖼️ The device shows the buffer operation status indicators with the assigned icons on the SSD1306 OLED display for each thermal imaging buffer as Waiting.
⚠️🔊♨️🖼️ XIAO ESP32C6 transfers the first CNC positioning command to Arduino Nano via serial communication to initiate the four-step CNC positioning sequence. Then, the device updates the first buffer operation status indicator to Ongoing.
⚠️🔊♨️🖼️ After XIAO ESP32C6 sends a CNC positioning command, Arduino Nano informs the user of the positioning process by adjusting the RGB LED color.
Red ➡ command received via serial communication
Yellow ➡ the positioning process is completed
Green ➡ the coordinate update confirmation message — CNC_OK — sent (replied) to XIAO ESP32C6 via serial communication
⚠️🔊♨️🖼️ After receiving the coordinate update confirmation message, XIAO ESP32C6 updates the associated buffer operation status indicator to Completed on the screen.
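From the XIAO ESP32C6 side, this handshake could be sketched as follows; only the CNC_OK confirmation string comes from the described flow, while the command text and timeout are illustrative:

```cpp
// Assumes Serial1.begin(...) is configured in setup() for the link to Arduino Nano.
bool send_cnc_command_and_wait(const char* command) {
  Serial1.println(command);                     // transfer the positioning command
  unsigned long start = millis();
  while (millis() - start < 30000) {            // wait up to 30 s for the confirmation
    if (Serial1.available()) {
      String reply = Serial1.readStringUntil('\n');
      if (reply.indexOf("CNC_OK") > -1) return true;   // coordinate update confirmed
    }
    delay(10);
  }
  return false;                                 // timed out; flag the positioning failure
}
```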
⚠️🔊♨️🖼️ Then, XIAO ESP32C6 makes an HTTP GET request to the web dashboard in order to make Photon 2 produce a thermal imaging buffer and register the generated buffer to the first linked cloud variable via the Particle Cloud API.
⚠️🔊♨️🖼️ After making the GET request successfully, the device updates the associated buffer operation status indicator to Saved on the screen.
⚠️🔊♨️🖼️ After the web application runs the linked cloud function via the Particle Cloud API, Photon 2 employs the MLX90641 thermal imaging camera to generate a 16x12 IR array.
⚠️🔊♨️🖼️ Then, Photon 2 applies the specific color algorithm to convert the generated IR array to a thermal imaging buffer based on the predefined temperature thresholds.
'w' ➜ White
'c' ➜ Cyan
'b' ➜ Blue
'y' ➜ Yellow
'o' ➜ Orange
'r' ➜ Red
⚠️🔊♨️🖼️ After producing the thermal imaging buffer successfully, Photon 2 turns the RGB LED to green and updates the first buffer registration status indicator to registered.
⚠️🔊♨️🖼️ Since Photon 2 updates the cloud variables automatically as the linked program variables are modified, the produced thermal imaging buffer is registered to the first linked cloud variable automatically.
⚠️🔊♨️🖼️ Then, the device turns off the RGB LED.
⚠️🔊♨️🖼️ The device repeats this procedure until the four-step CNC positioning sequence concludes and all required thermal imaging buffers are registered to the Particle Cloud.
⚠️🔊♨️🖼️ After finalizing the four-step CNC positioning sequence, XIAO ESP32C6 transmits the CNC zeroing command to Arduino Nano via serial communication.
⚠️🔊♨️🖼️ Then, Arduino Nano returns the thermal camera container head to the starting point (zeroing) by estimating the total number of steps revolved.
⚠️🔊♨️🖼️ After XIAO ESP32C6 sends the zeroing command, Arduino Nano informs the user of the zeroing process by adjusting the RGB LED color.
Red ➡ command received via serial communication
Yellow ➡ the zeroing process is completed
Purple ➡ the zeroing confirmation message — CNC_OK — sent (replied) to XIAO ESP32C6 via serial communication
⚠️🔊♨️🖼️ After obtaining the zeroing confirmation message, XIAO ESP32C6 updates all buffer operation status indicators to Image Ready on the screen.
⚠️🔊♨️🖼️ Then, XIAO ESP32C6 makes an HTTP GET request to the web dashboard in order to obtain all thermal imaging buffers registered on the Particle Cloud via the Particle Cloud API.
⚠️🔊♨️🖼️ As discussed in the previous steps, the web dashboard produces a precise thermal image (192 x 192) from the obtained buffers and saves the generated image on the server by running a Python script.
⚠️🔊♨️🖼️ After producing an accurate thermal image with the passed thermal imaging buffers successfully, the web application updates the system log on the MariaDB database accordingly.
⚠️🔊♨️🖼️ Finally, the web application updates its home (index) page automatically to showcase the latest system log entries. In addition to displaying the sample images and the collection dates, the web application lets the user download image samples individually on the home page.
⚠️🔊♨️🖼️ If the user activates the second menu option provided by Photon 2 — Inspect:
⚠️🔊♨️🖼️ The device turns the RGB LED to yellow.
⚠️🔊♨️🖼️ The device draws each individual data point (color-based indicator) of the registered buffers on the ST7735 TFT display to show an accurate preview (snapshot) thermal image.
⚠️🔊♨️🖼️ If the registered thermal buffers do not meet the requirements to produce a thermal image, the device shows the blank preview image to notify the user.
⚠️🔊♨️🖼️ Also, the device lets the user clear all registered thermal scan buffers and set their status indicators as blank by pressing the control button OK. Then, it also removes the latest preview thermal image by displaying the blank one.
After producing thermal images manifesting stable and malfunctioning water-based HVAC system operations, I managed to construct a valid thermal image data set stored on the web application.
As discussed earlier, while collecting audio samples to construct a valid audio data set, I simply differentiated the generated audio samples by the cooling fan failure presence:
normal
defective
After finalizing my audio data set, I started to work on my Audio MFE neural network model to identify anomalous sound emanating from the cooling fans.
Since Edge Impulse provides developer-friendly tools for advanced AI applications and supports almost every development board due to its model deployment options, I decided to utilize Edge Impulse Enterprise to build my Audio MFE neural network model. Also, Edge Impulse Enterprise incorporates state-of-the-art machine learning algorithms and scales them for edge devices such as XIAO ESP32C6.
For sound-based abnormality detection, Edge Impulse provides the required tools for inspecting audio samples, slicing them into smaller windows, and modifying windows to extract features from the supported audio file formats — WAV, MP4, etc.
The Audio MFE processing block extracts time and frequency features from a signal, employing a non-linear scale in the frequency domain called the Mel scale. In that regard, the Audio MFE block extracts more features in the lower frequencies and fewer in the higher frequencies, so it performs exceptionally well for non-voice recognition use cases.
Conveniently, Edge Impulse Enterprise allows building predictive models with enhanced machine learning algorithms optimized in size and precision and deploying the trained model as an Arduino library. Therefore, I was able to build an accurate Audio MFE neural network model to identify anomalous sound originating from the cooling fans and run the optimized model on XIAO ESP32C6 without any additional requirements.
You can inspect my Audio MFE neural network model on Edge Impulse as a public project.
After splitting my audio data set into training and testing samples, I uploaded them to my project on Edge Impulse Enterprise.
After uploading and labeling my training and testing samples successfully, I designed an impulse and trained the model to detect anomalous sound originating from the cooling fans of the water-based HVAC system.
An impulse is a custom machine learning model in Edge Impulse. I created my impulse by employing the Audio (MFE) processing block and the Classification learning block.
The Audio MFE processing block extracts time and frequency features from a signal and simplifies the generated features for non-voice recognition by using a non-linear scale — Mel-scale.
The Classification learning block represents a Keras neural network model. This learning block lets the user change the model settings, architecture, and layers.
According to my prolonged experiments, I modified the neural network settings and architecture to achieve reliable accuracy and validity:
📌 Neural network settings:
Number of training cycles ➡ 100
Learning rate ➡ 0.010
Validation set size ➡ 10
After generating features and training my Audio MFE model, Edge Impulse evaluated the precision score (accuracy) as 100%.
Since I configured this neural network model to conform to the cooling fans of my simplified HVAC system, the precision score (accuracy) is approximately 100%. Thus, I highly recommend retraining the model before running inferences to detect anomalous sound emanating from different HVAC system components.
After building and training my Audio MFE neural network model, I tested its accuracy and validity by utilizing testing samples.
The evaluated accuracy of the model is 100%.
After validating my neural network model, I deployed it as a fully optimized and customizable Arduino library.
As discussed earlier, while producing thermal image samples to construct a valid image data set, I utilized the default classes to label the generated samples, required by Edge Impulse to enable the F1 score calculation:
no anomaly
anomaly
After finalizing my thermal image data set, I started to work on my visual anomaly detection model to diagnose ensuing thermal cooling malfunctions after applying anomalous sound detection to the water-based HVAC system.
Since Edge Impulse provides developer-friendly tools for advanced AI applications and supports almost every development board due to its model deployment options, I decided to utilize Edge Impulse Enterprise to build my visual anomaly detection model. Also, Edge Impulse Enterprise incorporates elaborate model architectures for advanced computer vision applications and optimizes the state-of-the-art vision models for edge devices and single-board computers such as LattePanda Mu.
Since analyzing cooling anomalies based on thermal images of HVAC system components is a complicated task, I decided to employ FOMO-AD, an advanced machine learning algorithm that combines GMM (Gaussian Mixture Model) anomaly detection with the optimized architecture of the Edge Impulse FOMO model. Supported by Edge Impulse Enterprise, FOMO-AD is an exceptional algorithm for detecting unanticipated defects by applying unsupervised learning techniques.
Although the FOMO-AD visual anomaly detection model is based on the FOMO algorithm, the FOMO-AD models behave significantly differently than FOMO models. By definition, the FOMO-AD models train only on normal (stable) image samples. Thus, handling unseen data or anomalies is not a challenge since the algorithm does not rely on the existence of training data demonstrating all possible anomalies. However, in this regard, the model accuracy is not calculated during training, and Edge Impulse requires predefined labels (anomaly and no anomaly) to estimate the precision (F1) score by running the model testing process.
Conveniently, Edge Impulse Enterprise allows building advanced computer vision models optimized in size and accuracy efficiently and deploying the trained model as a supported firmware (Linux x86_64) for LattePanda Mu. Therefore, I was able to build an accurate visual anomaly detection model to diagnose thermal cooling malfunctions based on thermal images and run the optimized model on LattePanda Mu without any additional requirements.
You can inspect my FOMO-AD visual anomaly detection model on Edge Impulse as a public project.
After splitting my thermal image data set into training (stable) and testing (thermal malfunction) samples, I uploaded them to my project on Edge Impulse Enterprise.
After uploading and labeling my training and testing samples with the default classes successfully, I designed an impulse and trained the model to diagnose ensuing thermal cooling malfunctions after applying anomalous sound detection to the water-based HVAC system.
An impulse is a custom machine learning model in Edge Impulse. I created my impulse by employing the Image processing block and the FOMO-AD (Images) learning block.
The Image processing block optionally turns the input image format to grayscale or RGB and generates a features array from the passed raw image.
The FOMO-AD (Images) learning block represents a machine learning algorithm that identifies anomalies based on the trained normal (stable) images by applying a Gaussian Mixture Model.
In this case, I configured the input image format as RGB since distinguishing thermal cooling malfunctions based on thermal images highly relies on color differences.
As stated by Edge Impulse, they empirically obtained the best anomaly detection results by applying 96x96 ImageNet weights regardless of the intended raw image input resolution. Thus, I utilized the same resolution for my visual anomaly detection model.
📌 Neural network settings:
Capacity ➡ High
📌 Neural network architecture:
MobileNetV2 0.35
The FOMO-AD learning block has one adjustable parameter — capacity. A higher capacity means a higher number of Gaussian components, increasing the model's adaptability to the original data distribution.
As discussed earlier, by definition, Edge Impulse does not evaluate the precision score (accuracy) during training.
After building and training my FOMO-AD visual anomaly detection model, I tested its accuracy and validity by utilizing testing samples.
In addition to validating the model during testing, Edge Impulse evaluates the F1 precision score (accuracy) and provides per region anomalous scoring results for the passed testing images. To tweak the learning block sensitivity, Edge Impulse lets the user change the suggested confidence threshold estimated based on the top anomaly scores in the training dataset. In that regard, the user can adjust the anomaly detection rate according to the expected real-world conditions.
After validating my FOMO-AD model, Edge Impulse evaluated the precision score (accuracy) as 100%.
Since I configured this visual anomaly detection model to conform to the produced thermal images of my simplified HVAC system components, the precision score (accuracy) is approximately 100%. Thus, I highly recommend constructing a new thermal image data set of different HVAC system components and retraining the model before running inferences to diagnose thermal cooling malfunctions.
According to my rigorous experiments, I set the confidence threshold as 5.
Since this classification page provides max and mean anomaly scores, Edge Impulse lets the user compare region anomaly results effortlessly based on the altered confidence thresholds.
After setting the confidence threshold, I deployed my visual anomaly detection model as a fully optimized and customizable Linux (x86_64) application (.eim).
Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single package while deploying models as Arduino libraries, even for complex machine learning algorithms, I was able to import my advanced model effortlessly to run inferences on XIAO ESP32C6.
After importing my model successfully, I programmed XIAO ESP32C6 to run inferences to identify anomalous sound emanating from the cooling fans.
However, the Arduino IDE kept throwing a compile error message as shown below during my initial experiments.
\Arduino\libraries\AI-driven_HVAC_Fault_Diagnosis_Audio__inferencing\src\edge-impulse-sdk\classifier\ei_classifier_config.h
As explained in the previous steps, the device performs lots of interconnected features between different development boards and the web application for data collection and running advanced AI models. Thus, the described code snippets show the different aspects of the same code file. Please refer to the code files or the demonstration videos to inspect all interconnected functions in detail.
📁 HVAC_fault_diagnosis_anomalous_sound.ino
⭐ Define the required parameters to run an inference with the Edge Impulse Audio MFE neural network model.
⭐ Define the threshold value for the model outputs (predictions).
⭐ Define the anomalous sound (audio) class names.
⭐ In the run_inference_to_make_predictions function:
⭐ Summarize the Edge Impulse neural network model (Audio MFE) inference settings and print them on the serial monitor.
⭐ If the I2S microphone generates a raw audio (data) buffer successfully:
⭐ Create a signal object from the resized (scaled) raw data buffer — raw audio buffer.
⭐ Run an inference.
⭐ Print the inference timings on the serial monitor.
⭐ Obtain the prediction results for each label (class).
⭐ Print the model classification results on the serial monitor.
⭐ Get the predicted label (class) explicitly based on the given threshold.
⭐ Print inference anomalies on the serial monitor, if any.
⭐ Release the previously generated raw audio buffer if requested.
⭐ In the microphone_audio_signal_get_data function:
⭐ Convert the given microphone (raw audio) data (buffer) to the out_ptr format required by the Edge Impulse neural network model (Audio MFE).
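These two functions follow the standard Edge Impulse Arduino SDK inference pattern. Condensed, and with the raw audio buffer name assumed, they might look like this:

```cpp
#include <AI-driven_HVAC_Fault_Diagnosis_Audio__inferencing.h>

// Raw audio buffer, assumed to be filled via the I2S microphone elsewhere.
static int16_t sample_audio_buffer[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr) {
  // Convert the raw I2S samples to the float format required by the model.
  numpy::int16_to_float(&sample_audio_buffer[offset], out_ptr, length);
  return 0;
}

void run_inference_to_make_predictions() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &microphone_audio_signal_get_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // Print the prediction result for each label (class) on the serial monitor.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      ei_printf("%s: %.5f\n", result.classification[i].label,
                result.classification[i].value);
    }
  }
}
```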
⭐ If the second option (Faulty Sound) is activated:
⭐ Every five seconds, run an inference with the Edge Impulse Audio MFE neural network model.
⭐ If the given model detects anomalous sound originating from the cooling fans:
⭐ Clear the previously assigned buffer operation status indicators.
⭐ Start the four-step CNC positioning sequence to collect thermal imaging buffers and produce a precise thermal image.
⭐ If the control button D is pressed, redirect the user to the home screen.
Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single EIM file while deploying models as a Linux (x86_64) application, even for complex visual anomaly detection models, I was able to import my advanced FOMO-AD model effortlessly to run inferences in Python on LattePanda Mu (x86 Compute Module).
After importing the generated Linux application successfully, I programmed LattePanda Mu to run inferences to diagnose thermal cooling malfunctions of HVAC system components based on specifically produced thermal images.
Since I described all of the web application features earlier, including the Python script handling thermal image generation, running the visual anomaly detection model, and modifying the resulting image based on the visual anomaly grid with estimated cell anomaly intensity levels, please refer to Step 5.2 to inspect code snippets.
⚠️🔊♨️🖼️ If the user activates the second menu option provided by XIAO ESP32C6 — Faulty Sound:
⚠️🔊♨️🖼️ XIAO ESP32C6 generates a raw audio buffer via the I2S microphone and runs an inference with the Edge Impulse Audio MFE neural network model every five seconds to identify anomalous sound originating from the HVAC system cooling fans.
⚠️🔊♨️🖼️ Then, the device shows the detected audio class and its associated icon on the SSD1306 OLED display.
⚠️🔊♨️🖼️ If the Audio MFE detects anomalous sound, XIAO ESP32C6 initiates the four-step CNC positioning sequence to collect thermal imaging buffers and produce a precise thermal image via the web application.
⚠️🔊♨️🖼️ Except for the process type (detection) passed while making the HTTP GET request to the web application, the procedure is exactly the same as producing a thermal image sample, as explained in Step 8.d.
⚠️🔊♨️🖼️ After obtaining the thermal scan (imaging) buffers registered on the Particle Cloud, the web application produces a precise thermal image by running the Python script as shown in Step 8.d.
⚠️🔊♨️🖼️ However, instead of saving the produced image as a sample directly, the web application runs an inference with the Edge Impulse FOMO-AD visual anomaly detection model to diagnose consecutive thermal cooling malfunctions of HVAC system components after anomalous sound detection.
⚠️🔊♨️🖼️ Since the FOMO-AD model categorizes a passed image by individual cells (grids) based on the dichotomy between two predefined classes (anomaly and no anomaly), the web application utilizes the mean visual anomaly value to diagnose overall (high-risk) thermal cooling malfunctions based on the given confidence threshold.
⚠️🔊♨️🖼️ If the mean visual anomaly value is smaller than the given threshold, the web application saves the model resulting image directly by adding the prediction date to the file name.
normal__2024_06_21_18_08_41.jpg
⚠️🔊♨️🖼️ Otherwise, the web application obtains the visual anomaly grid, consisting of individual cells with coordinates, assigned labels, and anomaly scores.
⚠️🔊♨️🖼️ If a cell's assigned label is anomaly, the web application calculates the cell's anomaly intensity level — Low (L), Moderate (M), High (H) — in relation to the given threshold, as sketched after this list.
⚠️🔊♨️🖼️ Then, the web application modifies the resulting image with the cells (rectangles) and their intensity levels, emphasizing the risk of component abnormalities.
⚠️🔊♨️🖼️ After modifying the resulting image, the web application saves it by adding the prediction date to the file name.
malfunction__2024_06_21_18_22_19.jpg
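The project implements this mapping in the web application's Python script; purely to illustrate the logic (shown in C++ for consistency with the other sketches), with placeholder ratios rather than the project's exact cut-offs:

```cpp
// Map a cell's anomaly score to an intensity level relative to the given threshold.
const char* intensity_level(float anomaly_score, float threshold) {
  if (anomaly_score < threshold * 1.5) return "L";   // Low
  if (anomaly_score < threshold * 2.0) return "M";   // Moderate
  return "H";                                        // High
}
```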
⚠️🔊♨️🖼️ After generating and saving the model resulting image, the web application updates the system log on the MariaDB database accordingly.
⚠️🔊♨️🖼️ Then, the web application updates its home (index) page automatically to showcase the latest system log entries. In addition to displaying the model resulting images, anomalous sound notifications, thermal cooling malfunction status, and prediction dates, the web application lets the user download modified resulting images individually on the home page.
⚠️🔊♨️🖼️ Furthermore, the web dashboard allows the user to change the system log update category to display only the associated system log entries.
All
Cooling Malfunction Detections
Thermal Image Samples
Anomalous Sound Samples
⚠️🔊♨️🖼️ Finally, the web application sends an SMS via Twilio to inform the user of anomalous sound detection and the status of the ensuing thermal cooling malfunctions of HVAC system components.
⚠️🔊♨️🖼️ If the user activates the second menu option provided by Photon 2 — Inspect:
⚠️🔊♨️🖼️ Similar to the image sample generation, the device draws each individual data point (color-based indicator) of the registered buffers on the ST7735 TFT display to show an accurate preview (snapshot) thermal image.
⚠️🔊♨️🖼️ Also, XIAO ESP32C6 prints progression notifications on the serial monitor for debugging.
While conducting experiments with this HVAC system malfunction detection device, I added an aquarium heater to the water reservoir to artificially increase the temperature of the water circulating in the closed-loop system. In that regard, I was able to simulate and diagnose thermal cooling malfunctions of HVAC system components (aluminum blocks) after identifying anomalous sound originating from the cooling fans.
By applying advanced AI-powered multi-algorithm detection methods to identify anomalous sound emanating from the cooling fans and diagnose ensuing thermal cooling malfunctions of water-based HVAC system components, we can:
⚠️🔊♨️🖼️ prevent pernicious cooling aberrations deteriorating heat regulation for industrial facilities,
⚠️🔊♨️🖼️ avert production downtime for demanding industrial processes,
⚠️🔊♨️🖼️ reduce production costs and increase manufacturing efficiency,
⚠️🔊♨️🖼️ obviate excessive energy consumption via real-time (automated) malfunction diagnosis,
⚠️🔊♨️🖼️ extend equipment lifespan by maintaining stable heat transfer,
⚠️🔊♨️🖼️ deter exorbitant overhaul processes due to prolonged negligence, leading to a nosedive in production quality.
As a developer, if you want to inspect or access code files and custom design files effortlessly, you can visit the project's GitHub repository, providing:
Code files
Gerber files
STL files
Custom libraries
Edge Impulse machine learning models — Audio MFE and FOMO-AD
Huge thanks to ELECROW for sponsoring this project with their high-quality PCB manufacturing service and for sending me a CrowVision 11.6'' TouchScreen Module (1366x768).
Huge thanks to Seeed Studio for sponsoring these products:
Huge thanks to DFRobot for sponsoring these products:
Also, huge thanks to Anycubic for sponsoring an Anycubic Kobra 2.
Since XIAO ESP32C6 is a feature-rich development board providing an I2S port, I was able to connect a Fermion I2S MEMS microphone to collect raw audio buffers easily. Nevertheless, after conducting some experiments, I noticed the produced audio buffers were noisy or completely inaccurate. Therefore, I added additional resistors to the WS (+20K) and DO (+220Ω) pins of the I2S microphone. Then, I managed to obtain precise raw audio buffers.
To provide the user with a feature-packed interface, I connected an SSD1306 OLED display and four control buttons to XIAO ESP32C6. I also connected an RGB LED to Arduino Nano to inform the user of the CNC router status while performing operations according to the CNC commands transferred by XIAO ESP32C6.
Since Arduino Nano operates at 5V and XIAO ESP32C6 requires 3.3V logic level voltage, their pins cannot be connected directly, even for serial communication. Therefore, I utilized a bi-directional logic level converter to shift the voltage for the connections between XIAO ESP32C6 and Arduino Nano.
To control the CNC router effortlessly, I connected a 28BYJ-48 stepper motor to Arduino Nano via its built-in ULN2003 driver module. Since I wanted to implement an automatic homing sequence for the CNC router, I connected a micro switch with a pulley (JL024-2-026) to Arduino Nano, similar to a 3D printer limit switch.
Since the 28BYJ-48 stepper motor can be current-demanding on full load, I connected an additional 5V battery to supply the stepper motor without damaging other components.
Since Particle Photon 2 is a capable IoT development board providing Particle Cloud compatibility out of the box, I was able to set up cloud variables and functions effortlessly to communicate with Photon 2 via the Particle Cloud API through the web dashboard.
To obtain accurate thermal scan (imaging) buffers, I connected an MLX90641 thermal imaging camera to Photon 2 via a Grove 4-pin connection cable. Since the MLX90641 camera produces 16x12 IR arrays (buffers) with fully calibrated 110° FOV (field-of-view), I was able to generate considerably large thermal images by combining four sequential buffers and adjusting pixel size.
Although Photon 2 is a powerful development board, it is not suitable for generating thermal images, saving image samples, and running a demanding visual anomaly detection model simultaneously due to memory limitations. Therefore, the web dashboard, hosted by LattePanda Mu, handles all of the mentioned operations after Photon 2 registers the produced thermal scan (imaging) buffers to the associated Particle Cloud variables.
To provide the user with a feature-rich interface, I connected an ST7735 TFT display and a COM-09032 analog joystick to Photon 2. I also added an RGB LED to inform the user of the device status while performing operations related to thermal buffer collection and registration.
First, I attached 120 mm RGB case fans to the aluminum radiator via M3 screws and nuts.
Then, I attached a terminal input female DC barrel jack to the water pump and connected two aluminum cooling blocks via plastic tubing.
I created the closed-loop water cooling system by making the tubing connections in sequence.
Finally, I fastened the water pump into the custom water reservoir and passed the cooling system IN and OUT tubings through the built-in plastic fittings on the reservoir top cover. Since I utilized TPU flexible filament to print the custom water cooling parts, I did not encounter any issues while connecting plastic tubings or circulating water through the system.
First, I attached the Kyogre PCB to its unique encasement affixed to the right radiator mount.
Then, I made the required connections between the ULN2003 driver board and the Kyogre PCB via jumper wires.
I fastened the micro switch (JL024-2-026) to its connector attached to the left CNC stand and made the required connections between the micro switch and the Kyogre PCB via jumper wires.
I attached the Groudon PCB to its unique encasement affixed to the right CNC stand.
I fastened the MLX90641 thermal imaging camera to its slot on the thermal camera container head via the hot glue gun. Then, I made the required connections between the thermal imaging camera and the Groudon PCB by extending the Grove 4-pin connection cable via jumper wires.
I attached the radiator to the radiator mounts in a tilted position and placed the aluminum cooling blocks under the custom CNC router, aligning the thermal imaging camera position.
While conducting experiments with the completed HVAC system, I noticed the custom reservoir started leaking after changing color. I assume the reason is that the color-changing additives in the TPU filament slightly distort the infill shape of the bottom of the 3D-printed reservoir. Thus, I employed a glass jar as the reservoir to replace the leaking one.
To showcase the web dashboard, I connected the CrowVision 11.6'' touchscreen module to LattePanda Mu via an HDMI to Mini-HDMI cable. Since I placed the Lite Carrier board into its custom flexible case, I did not encounter any issues while connecting peripherals to LattePanda Mu.
First of all, sign up for Twilio and navigate to the Account page to utilize the default (first) account or create a new account.
After verifying a phone number for the selected account (project), set the initial account settings for SMS in PHP.
To configure the SMS settings, go to Messaging ➡ Send an SMS.
Since a virtual phone number is required to transfer an SMS via Twilio, click Get a Twilio number.
After obtaining the free virtual phone number, download the Twilio PHP Helper Library to send an SMS via the web dashboard.
Finally, go to Geo permissions to adjust the allowed recipients depending on your region.
After configuring the required settings, go to Account ➡ API keys & tokens to get the account SID and the auth token under Live credentials to be able to employ Twilio's SMS API to send SMS.
First, download the XAMPP Linux installer.
After downloading the XAMPP installer, change its permissions via the terminal (command line).
Then, execute the XAMPP installer via the terminal.
After configuring the required settings via the installer, run the XAMPP application (lampp) via the terminal.
Since the XAMPP development environment does not create a shortcut on Linux, you always need to use the terminal to launch XAMPP (lampp) unless you enable autostart.
First, create the web application folder under the lampp folder and change its permissions via the terminal to be able to generate, open, and save files.
Since we need to edit the sudoers file to change user privileges, open the terminal and utilize the visudo command to alter the sudoers file safely.
Since the XAMPP application (lampp) employs daemon as the user name, add these lines to the end of the sudoers file to enable the web application to run the sudo command without requiring a password.
First, install the OpenCV module required to generate and modify thermal images.
To run Edge Impulse machine learning models on LattePanda Mu, install the Edge Impulse Linux Python SDK via the terminal.
After setting up the XAMPP application (lampp) on LattePanda Mu, open the phpMyAdmin tool on the browser manually to create a new database named hvac_system_updates.
After adding the database successfully, go to the SQL section to create a MariaDB database table named system_log with the required data fields.
First, remove the Arduino-ESP32 board package if you have already installed it on the Arduino IDE.
Then, go to Preferences ➡ Additional Boards Manager URLs and add the official development version URL for the Arduino-ESP32 board package:
To install the required core, navigate to Tools ➡ Board ➡ Boards Manager, search for esp32, and select the latest development release — 3.0.0-rc1.
After installing the core, navigate to Tools ➡ Board ➡ ESP32 Arduino and select XIAO_ESP32C6.
Download and inspect the required libraries for the components connected to XIAO ESP32C6:
If the Arduino IDE shows the correct port number but fails to upload the given code file, push and release the RESET button while pressing the BOOT button. Then, XIAO ESP32C6 should accept the uploaded code in the BootLoader mode.
First, download Visual Studio Code (VSCode) from the official installer.
After installing VS Code, go to Extensions Marketplace and search for the Particle Workbench extension.
While downloading the Workbench extension, VSCode should install and build all dependencies automatically, including the device toolchain, C++ extension, Particle CLI, etc.
After downloading the Workbench extension, go to the Command Palette and select Particle: Create New Project. Then, enter the project directory name.
First, open the Particle setup wizard on the browser.
After initiating the setup process, the wizard requests the user to create a Particle account.
After creating a new account, connect Particle Photon 2 to the computer through the USB port and resume the setup process.
Then, the setup wizard should recognize Photon 2 (P2) and fetch the board information automatically.
After getting the board information, the setup wizard updates Photon 2 to the latest Device OS and firmware.
After updating Photon 2, create a new product (device group) and add Photon 2 to the created product with a unique name — hvac_control.
Connect Photon 2 to a Wi-Fi network in order to enable data transmission with the Particle Cloud.
Finally, go to the Particle Console to check whether the Cloud connection is established successfully.
After setting up Photon 2 successfully via the web-based setup wizard, return to the Workbench extension and select Particle: Configure Project for Device on the Command Palette.
Choose the compatible device OS version and select the target platform — Photon 2 / P2.
Then, obtain the device ID from the Particle Console and enter it on the Workbench extension to enable extra features, such as cloud compiling.
First, search for the required library on the Particle libraries ecosystem via the Library search tool.
If there is a supported version of the library in the ecosystem, go to the Workbench Welcome Screen and click Code ➜ Install library.
Then, enter the library name to install the given library with all dependencies.
After signing in to your account, go to the web-based token generation tool, enter the expiration time, and create a new user access token.
To be able to display images (icons), first convert image files (PNG or JPG) to monochromatic bitmaps. Then, convert the generated bitmaps to compatible C data arrays. I decided to utilize LCD Assistant to create C data arrays.
After installing LCD Assistant, upload a monochromatic bitmap and select Vertical or Horizontal, depending on the screen type.
Then, save all the converted C data arrays to the logo.h file.
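For instance, a converted icon array in logo.h can be drawn with the Adafruit GFX drawBitmap call; the 8x8 checker pattern below is a hypothetical stand-in for a real converted icon:

```cpp
// Hypothetical 8x8 monochromatic bitmap (a checker pattern) stored in logo.h:
const unsigned char sample_icon[] PROGMEM = {
  0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55
};

// Drawn on the SSD1306 OLED via Adafruit GFX:
// display.drawBitmap(x, y, sample_icon, 8, 8, SSD1306_WHITE);
```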
As mentioned earlier, the string variables are linked to the Particle Cloud variables. Since Photon 2 updates the cloud variables automatically when the linked variables are modified, do not forget to add delays in while loops; otherwise, a tight while loop blocks the Particle Cloud network connection (threading).
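For example, a minimal polling helper that yields time to the system thread; the button pin name is an assumption:

```cpp
const int joystick_button = D2;   // assumed pin for the joystick push button

void wait_for_button_release() {
  while (digitalRead(joystick_button) == LOW) {
    delay(50);   // yields execution so the Particle Cloud connection stays alive
  }
}
```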
As discussed earlier, this function is linked to a Particle Cloud function. Thus, the Particle Cloud API can access and execute the given function remotely.
Since XIAO ESP32C6 communicates with the web application (dashboard) to handle the thermal imaging buffer collection in sync with the four-step CNC positioning sequence, the following descriptions show features performed by XIAO ESP32C6 and Photon 2 in tandem.
First of all, to utilize the incorporated tools for advanced AI applications, sign up for Edge Impulse Enterprise.
Then, create a new project under your organization.
Navigate to the Data acquisition page and click the Upload data icon.
Choose the data category (training or testing) and select WAV audio files.
Utilize the Enter Label section to label the passed audio samples automatically with the same class in the file names.
Then, click the Upload data button to upload the labeled audio samples.
Go to the Create impulse page and leave Window size and Window increase parameters as default. In this case, I did not need to slice the passed audio samples since all of them have roughly one-second duration.
Before generating features for the Audio MFE model, go to the MFE page to configure the block settings if necessary.
Since the MFE block transforms a generated window into a table of data where each row represents a range of frequencies and each column represents a span of time, you can configure block parameters to adjust the frequency amplitude to change the MFE's output — spectrogram.
After inspecting the generated MFE parameters, I decided to utilize the default settings since my audio samples are simple and do not require precise tuning.
Click Save parameters to save the calculated MFE parameters.
After saving parameters, click Generate features to apply the MFE signal processing block to training samples.
Finally, navigate to the Classifier page and click Start training.
To validate the trained model, go to the Model testing page and click Classify all.
To deploy the validated model as an Arduino library, navigate to the Deployment page and search for Arduino library.
Then, choose the Quantized (int8) optimization option to get the best performance possible while running the deployed model.
Finally, click Build to download the model as an Arduino library.
First of all, to utilize the incorporated tools for advanced AI applications, sign up for Edge Impulse Enterprise.
Then, create a new project under your organization.
To be able to label image samples manually on Edge Impulse for FOMO-AD visual anomaly detection models, go to Dashboard ➡ Project info ➡ Labeling method and select One label per data item.
Navigate to the Data acquisition page and click the Upload data icon.
Distinguish image samples as training and testing samples depending on the presence of anomaly (malfunction).
Choose the data category (training or testing) and select the associated image files.
Utilize the Enter Label section to label the passed image samples automatically with the required class — no anomaly for training and anomaly for testing.
Then, click the Upload data button to upload the labeled image samples.
Go to the Create impulse page and set image width and height parameters to 96. Then, select the resize mode parameter as Fit shortest axis so as to scale (resize) given training and testing image samples.
Select the Image processing block.
Then, to choose the visual anomaly detection algorithm, click Add a learning block and select the FOMO-AD (Images) learning block. Finally, click Save Impulse.
Before generating features for the visual anomaly detection model, go to the Image page and set the Color depth parameter as RGB. Then, click Save parameters.
After saving parameters, click Generate features to apply the Image processing block to training image samples.
After generating features successfully, navigate to the FOMO-AD page and click Start training.
According to my prolonged experiments, I modified the neural network settings and architecture (as listed in the previous section) to achieve reliable accuracy and validity.
To validate the trained model, go to the Model testing page and click Classify all.
Then, click the Gear icon and select Set confidence thresholds to tweak the learning block sensitivity to adjust the anomaly detection rate based on the expected real-world conditions.
After setting the confidence threshold, select a testing image sample and click Show classification to inspect the detected label and the per region anomalous scoring results.
To deploy the validated model as a Linux (x86_64) application, navigate to the Deployment page and search for Linux (x86).
Then, choose the Quantized (int8) optimization option to get the best performance possible while running the deployed model.
Finally, click Build to download the model as a Linux (x86_64) application (.eim).
After downloading the model as an Arduino library in the ZIP file format, go to Sketch ➡ Include Library ➡ Add .ZIP Library...
Then, include the AI-driven_HVAC_Fault_Diagnosis_Audio__inferencing.h file to import the Edge Impulse Audio MFE neural network model.
To solve the mentioned compiling error, open the ei_classifier_config.h file and set EI_CLASSIFIER_TFLITE_ENABLE_ESP_NN to 0.
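That is, in the library's ei_classifier_config.h file:

```cpp
// Disable the ESP-NN accelerated kernels that break compilation on the ESP32-C6 core:
#define EI_CLASSIFIER_TFLITE_ENABLE_ESP_NN 0
```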
The process is the same for generating a sample thermal image — the third option — except for the passed process type (GET request parameter). This time, the web application utilizes the produced thermal image to run an inference with the Edge Impulse FOMO-AD visual anomaly detection model and generate a model resulting image.
After downloading the generated Linux (x86_64) application to the model folder under the root folder of the web application, make sure to change the file permissions via the Properties tab to be able to execute the model file. As shown earlier, you can also use the terminal (shell) to change file permissions.