Researching the relationship between temperature, humidity, and asthma with wearable tech on the Arduino Nicla x K-Way smart jacket.
Created By: Nick Bild
Public Project Link: https://studio.edgeimpulse.com/public/148301/latest
GitHub Repository: https://github.com/nickbild/environmental_asthma_risk
The US economy takes a hit of more than 80 billion dollars annually due to both direct and indirect effects of asthma. This comes in the form of medical expenses, missed work days, and even death. Some of these impacts can be prevented by avoiding asthma triggers and taking preventative measures, like using prescribed inhalers or limiting physical activity when at risk.
One thing that may help in understanding when someone is at high risk for an asthma attack is monitoring environmental risk factors. It has been noted, for example, that factors like temperature and humidity increase the number of emergency department visits for the condition. But the exact relationships between weather conditions and asthma flare ups are not entirely clear.
I decided to build a machine learning model and train it to understand the relationship between temperature, humidity and emergency department visits for asthma. Since a device running this model would need to always be with the person being monitored, making it into a wearable gadget makes a lot of sense. As it turns out, K-Way and Arduino recently teamed up to make a smart jacket that looks like the perfect platform to build my idea on, so I gave it a try.
Note that this device has not been validated clinically, nor has it been approved by the FDA or any other regulatory agency. It is a proof of concept and cannot be used to make health-related decisions.
1x K-Way jacket with integrated Arduino Nicla Sense ME
Edge Impulse Studio
Arduino IDE
The K-Way jacket is instrumented with an Arduino Nicla Sense ME. The Arduino is housed inside a tiny case and comes wired to a rechargeable LiPo battery. This hardware platform was designed with tinyML in mind, with an Arm Cortex M4 CPU operating at 64 MHz to run local inferences, and a slew of onboard sensors, including motion and environmental sensors.
One of the sensors on this board collects temperature and humidity measurements, then passes them into a machine learning model that I built with Edge Impulse Studio that predicts the number of emergency department visits for asthma that would be expected under those conditions. If that number exceeds a certain threshold, that is considered a high-risk day for asthma flare ups, and a message is sent to the jacket wearer’s smartphone to give them a heads up so they can take appropriate action.
I located a dataset that provides hourly weather and asthma emergency department visit metrics for an entire year. I processed this data with a simple Python script to create CSV files compatible with Edge Impulse. These CSV files were uploaded to my project in Edge Impulse Studio using the data acquisition tool.
I designed a simple impulse that feeds the previously uploaded training data into a regression model that learns to translate temperature and humidity into a prediction of the number of emergency department visits expected for the hour under those conditions.
Model testing showed that the model was accurate over 73% of the time.
Edge Impulse offers many options for deployment, but in my case the best option was the "Arduino library" download. This packaged up the entire classification pipeline as a compressed archive that I could import into Arduino IDE, then modify as needed to add my own logic. That allowed me to send Bluetooth Low Energy messages when certain thresholds were met. The Arduino sketch is available here.
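To illustrate how those pieces fit together, here is a minimal sketch in the spirit of the project rather than the published code: the inferencing header name, the sensor helper functions, the BLE UUIDs, and the risk threshold are all placeholders, while the Edge Impulse calls follow the standard Arduino library API.

```cpp
#include <ArduinoBLE.h>
// Hypothetical header name; use the name of the Arduino library exported from your Edge Impulse project.
#include <environmental_asthma_risk_inferencing.h>

// Placeholder threshold: a predicted hourly ED-visit count above this is treated as a high-risk hour.
const float RISK_THRESHOLD = 20.0f;

// Illustrative custom UUIDs for the alert service and characteristic.
BLEService riskService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLEFloatCharacteristic riskChar("19B10001-E8F2-537E-4F6C-D104768A1214", BLERead | BLENotify);

// Stubs standing in for the Nicla Sense ME sensor reads used in the real sketch.
float readTemperatureC() { return 25.0f; }
float readHumidityPct()  { return 60.0f; }

void setup() {
  Serial.begin(115200);
  BLE.begin();
  BLE.setLocalName("AsthmaRiskJacket");
  BLE.setAdvertisedService(riskService);
  riskService.addCharacteristic(riskChar);
  BLE.addService(riskService);
  BLE.advertise();
}

void loop() {
  BLE.poll();

  // One frame of raw features: temperature and humidity, in the order used during training.
  float features[] = { readTemperatureC(), readHumidityPct() };

  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // For a regression block the single output value is the predicted number of ED visits.
    float predictedVisits = result.classification[0].value;
    if (predictedVisits > RISK_THRESHOLD) {
      riskChar.writeValue(predictedVisits);   // notify the paired smartphone of a high-risk hour
    }
  }

  delay(60000);  // weather changes slowly, so one inference per minute is plenty
}
```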
The K-Way/Arduino smart jacket is an interesting new platform for tinyML applications. By embedding the hardware in a jacket, it does not require any special effort on the part of the wearer to bring their intelligent algorithms along with them wherever they go. This type of always-available platform is very promising for health-related applications, and worked quite well for my prototype. I hope to see devices like this be clinically validated in the future so that they can make a real difference in people’s day-to-day lives.
Use a computer vision model to determine occupancy in rooms and adapt HVAC zone output accordingly.
Created By: Jallson Suryo
Public Project Link: https://studio.edgeimpulse.com/public/215243/latest
GitHub Repo: https://github.com/Jallson/Smart_HVAC
A common problem found in HVAC systems is that energy is wasted, because the system uses more energy than necessary, or the system cannot quickly adjust to the changing needs in a dynamic environment. To tackle the problem, we need a system that manages its power intensity based on what is necessary for each zone in real-time for a given environment. The power intensity necessary for each zone can be derived from the following data: number of people, time or duration spent inside, and/or the person's activity.
To overcome this challenge, a Smart HVAC System that can optimize energy consumption by adjusting the power intensity in different zones inside an office or a residential space (zones with more people, more activity, and longer time durations will need more cooling/heating and vice versa) could be created. The zone heat mapping will be generated using data obtained from an Arduino Nicla Vision (with Edge Impulse's FOMO Machine Learning model embedded) that's mounted like a surveillance camera inside the space.
The project uses Edge Impulse's FOMO to detect multiple objects and their coordinates using a compact microcontroller with an on-board camera (the Nicla Vision). The object detection ML model will use a top view of miniature figures in standing and sitting positions as objects. The captured data will be split into Training and Test sets, and then an Impulse will be created with an Image processing block (grayscale) and an Object Detection learning block.
The accuracy result for this model on the training and test data is above 90%, giving a high degree of confidence when counting the number of objects (persons) and tracking their centroid coordinates.
The ML model is then deployed to the Nicla Vision. The number of objects in each zone is displayed on an OLED display. The Nicla Vision also communicates to an Arduino Nano via I2C which we are using for the fan speed controller.
This system will increase fan intensity in areas/zones that need more cooling/heating, which means more activity/people in a certain zone will increase the fan intensity in that zone. The total HVAC power output can also be adjusted based on the total number of people in the space.
The project is a proof of concept (PoC) using a 1:50 scale model of an office interior with several partitions, furniture, and miniature figures. The space is divided into 4 zones, and each zone has a small fan installed. The OLED display is used in this PoC to show the output of this simulation.
Arduino Nicla Vision, aluminium extrusion frame, and 3D printed miniature model (1:50)
System Diagram, prototyping in breadboard, and Smart HVAC System with custom design PCB
Arduino Nicla Vision
Arduino Nano
2x TB6612 Motor drivers
4x DC 5V mini fan 3cm
0.96-inch OLED display
Aluminium extrusion as a camera stand
3D printed (case, office interior 1:50 miniature)
Powerbank & battery for Nicla Vision
Edge Impulse Studio
Arduino IDE
OpenMV IDE
For ease of use, in this project we will use a smartphone camera to capture the images for data collection. Take pictures from above, in different positions, with varying backgrounds, angles, and lighting conditions to ensure that the model can work under slightly different conditions (and to prevent overfitting). Lighting and object size are crucial to the performance of this model.
Note: Keep the objects similar in size across the pictures. A significant difference in object size will confuse the FOMO algorithm.
Open studio.edgeimpulse.com, login (or create an account first), then create a new Project.
Choose the Images project option, then Classify Multiple Objects. In Dashboard > Project Info, choose Bounding Boxes for labelling method and Nicla Vision for target device. Then in Data acquisition, click on Upload Data tab, choose your photo files, auto split, then click Begin upload.
Click on the Labelling queue tab, then drag a box around an object, label it (person), and click Save. Repeat until all images are labelled.
Make sure that the ratio between Training and Test data is ideal, around 80/20.
Once you have the dataset ready, go to Create Impulse and set the image width and height to 96 x 96 (this helps keep the ML model small enough to fit within the Nicla's memory). Then choose Fit shortest axis, and add Image and Object Detection as the processing and learning blocks.
Go to the Image parameters section, select Grayscale as the color depth, then press Save parameters. Then click Generate features, navigate to the Object Detection section, leave the Neural Network training settings as they are (in our case the defaults work quite well), and choose FOMO (MobileNet V2 0.35). Train the model by pressing the Start training button. You can see the progress on the right side.
If everything looks OK, then we can test the model. Go to Model Testing on the left, then click Classify all. Our result is above 90%, so we can move on to the next step: deployment.
To use the OpenMV firmware, you will need the OpenMV IDE installed on your computer. Once you have the IDE ready, if you check the downloaded .zip folder you will find a number of files. We will need the following files:
edge_impulse_firmware_arduino_nicla_vision.bin and ei_object_detection.py.
The next step is loading the downloaded firmware containing the ML model to the Nicla Vision board. So go back to OpenMV and go to Tools -> Run Bootloader (Load Firmware), select the .bin file in the unzipped folder, and click Run.
Next, we will run the Python script. Go to File -> Open File and select the .py file from the unzipped folder. Once the file is opened, connect the Nicla Vision board, select the serial/COM port, and click the green “play” button. The program should now be running and you can see the FOMO object detection running in a small window.
You should have the Arduino IDE installed on your computer for the following step. Once the Edge Impulse Arduino firmware is built, downloaded, and unzipped, download the nicla_vision_camera_smartHVAC_oled.ino code (available here) and place it inside the unzipped Edge Impulse folder. Then move that folder to your Arduino sketches folder. Now you can upload the .ino code to your Nicla Vision board via the Arduino IDE.
The .ino code is a modified version of the Edge Impulse example code for object detection on the Nicla Vision. The modification adds the capability to display the person count in each room on the OLED screen and to act as the I2C controller for the Arduino Nano peripheral. The code distinguishes the four rooms using four quadrants, and by knowing the X, Y coordinates of each object's centroid we can locate the person. The Arduino Nano adjusts each fan motor using PWM based on the number of persons present in the room.
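For orientation, here is a condensed, hypothetical version of that zone-counting and I2C logic, intended to be called from the example sketch's loop after run_classifier(); the 96 x 96 frame size matches the Impulse, while the Nano's I2C address (0x08) and the inferencing header name are assumptions.

```cpp
#include <Wire.h>
// Hypothetical header name for the Arduino library exported from the Edge Impulse project.
#include <smart_hvac_inferencing.h>

const uint8_t NANO_I2C_ADDR = 0x08;    // assumed address of the Arduino Nano peripheral
const int FRAME_W = 96, FRAME_H = 96;  // inference resolution used by the Impulse

// Count detected persons per zone (0..3), one zone per image quadrant.
void countZones(const ei_impulse_result_t &result, uint8_t counts[4]) {
  for (int z = 0; z < 4; z++) counts[z] = 0;
  for (size_t i = 0; i < result.bounding_boxes_count; i++) {
    const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[i];
    if (bb.value == 0) continue;              // skip empty detection slots
    int cx = bb.x + bb.width / 2;             // centroid of the detected person
    int cy = bb.y + bb.height / 2;
    int zone = (cy < FRAME_H / 2 ? 0 : 2) + (cx < FRAME_W / 2 ? 0 : 1);
    counts[zone]++;
  }
}

// Push the four zone counts to the Nano; call Wire.begin() once in setup() beforehand.
void sendZoneCounts(const uint8_t counts[4]) {
  Wire.beginTransmission(NANO_I2C_ADDR);
  Wire.write(counts, 4);
  Wire.endTransmission();
}
```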
The code for the Arduino Nano peripheral Nano_SmartHVAC_I2C_Peripheral.ino can be downloaded here.
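As a rough sketch of what that peripheral side could look like (the pin assignments, I2C address, and the simple count-to-PWM mapping are assumptions; the real build also drives the direction pins of the TB6612 drivers, which are omitted here):

```cpp
#include <Wire.h>

const uint8_t I2C_ADDR = 0x08;            // must match the address used by the vision board (assumed)
const uint8_t FAN_PINS[4] = {3, 5, 6, 9}; // PWM-capable Nano pins wired to the TB6612 PWM inputs (assumed wiring)

volatile uint8_t zoneCounts[4] = {0, 0, 0, 0};

// Called from the Wire ISR whenever the vision board pushes fresh zone counts.
void onReceiveCounts(int numBytes) {
  for (int z = 0; z < 4 && Wire.available(); z++) {
    zoneCounts[z] = Wire.read();
  }
}

void setup() {
  Wire.begin(I2C_ADDR);                   // join the I2C bus as a peripheral
  Wire.onReceive(onReceiveCounts);
  for (int z = 0; z < 4; z++) pinMode(FAN_PINS[z], OUTPUT);
}

void loop() {
  // More people in a zone means a higher duty cycle for that zone's fan (3 or more = full speed).
  for (int z = 0; z < 4; z++) {
    uint8_t duty = map(min(zoneCounts[z], (uint8_t)3), 0, 3, 0, 255);
    analogWrite(FAN_PINS[z], duty);
  }
  delay(100);
}
```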
Here is a quick prototype video showing the project:
Finally, we have successfully implemented this object detection model on an Arduino Nicla Vision and used the data captured by the camera to automatically control the HVAC system's fan power intensity and display the occupancy number and power meter for each zone. I believe this proof of concept project can be implemented in a real-world HVAC system, so that the goal of optimizing room temperature and saving energy can be achieved for a better, more sustainable future.
Complete demo video showing the project:
Using a Nordic Thingy:91 to detect the presence of gases such as CO2, CO, and Isopropanol.
Created By: Zalmotek
Public Project Link:
https://studio.edgeimpulse.com/studio/127759
GitHub Repository:
https://github.com/Zalmotek/edge-impulse-gas-detection-thingy-91-nordic
Gas detection is critical for ensuring the safety of workers in the oil and gas industry. Gas leaks can occur at any stage of production, from drilling and refining to transportation and storage. Gas sensors must be able to detect a wide range of gasses, including combustible gasses like methane and propane, as well as toxic gasses like carbon monoxide and hydrogen sulfide.
Gas detection systems have traditionally been based on point sensors, which are placed at strategic locations throughout a facility. These sensors are connected to a central control panel, which monitors the gas concentration in each location. If a gas leak is detected, an alarm is sounded and workers are evacuated from the area.
Even if they are the industry standard, traditional gas detection systems based on point sensors have several limitations.
They are limited by their space density, as they can only detect gas in the areas where they are positioned, meaning that leaks in other parts of the facility may go undetected.
They are limited by network coverage. Many sensors that depend on wireless communication are prone to malfunction during severe weather events, leaving locations in the facility unsupervised.
These systems rely on human operators to evaluate the data and take action.
To overcome those challenges, we propose a solution based on the Nordic Semi Thingy:91, an IoT multi-sensor device, equipped with an environmental sensor suite, LTE connectivity, and full compatibility with Edge Impulse machine learning models.
Thingy:91 is a complete prototype development platform for cellular IoT applications. It’s an easy-to-use prototyping tool that lets you quickly and cheaply try out your ideas and iterate until you have a working product. The onboard environmental sensor suite includes temperature, humidity, atmospheric pressure, color, light intensity, and UV index sensors. The onboard air quality sensor can detect a wide range of gasses, including methane, carbon monoxide, and hydrogen sulfide.
Gas sensor data is often noisy and unreliable, making it difficult to interpret. Machine learning algorithms can be used to filter out false readings and identify patterns in the data that indicate the presence of certain gasses.
By using the Thingy:91 as an edge device, it can measure the gas concentration, run a machine learning algorithm, decide if there is a trend in the quantity of dangerous gasses in the air, and act without human intervention, either by sounding an alarm or by sending a message to nearby employees who might be in danger.
A monitoring system based on the Nordic Thingy:91 could be used in two ways, each with its own advantages and disadvantages:
As a static point sensor.
This is advantageous when the area in which the sensor is placed is inaccessible via cable connection and is prone to connectivity problems. Gas detection is often performed in remote or difficult-to-access locations, making it impractical to send data to the cloud for analysis. Edge computing allows data to be processed locally, in real time, by using machine learning algorithms.
As a wearable.
By using it as a wearable, it will be able to detect gas leakages in the proximity of the employees that wear it. This is a great way of overcoming the space density problem, as it will monitor the air quality in the near vicinity of the employees, ensuring that they will not find themselves in an environment that might be dangerous for their health. Depending on the gasses that pose the greatest danger, the height at which the device must be placed varies.
Micro USB cable
Edge Impulse account
GIT
nRF Connect 3.11.1
For this use case, as mentioned above, we will be using the Thingy:91, a prototyping development kit created by Nordic Semiconductor. It is packed with sensors, making it a great pick for rapid prototyping, and it is equipped with an nRF9160 System-in-Package (SiP) that supports LTE-M, NB-IoT, and GNSS, allowing you to add a connectivity layer to any application.
This development board comes equipped with a 64 MHz Arm® Cortex®-M33 CPU that is well suited to running TinyML models at the edge to detect various phenomena; in our use case, dangerous gas leaks.
Because the board has all the required sensors embedded on it, there is no need for extra wiring. It’s enough to connect the board to a computer and start building the machine learning model.
As for the deployment phase, it comes down to the specific environment in which the system will be deployed. The Thingy:91 weighs under 100 g, so it can be mounted on hard surfaces using regular adhesives or, if the use case allows for it, using screws.
To build the machine learning model that will be used to detect dangerous leaks in environments characteristic to the oil and gas industry, we will be using the Edge Impulse platform. Register a free account and create a new project. Remember to give it a representative name and select Something else when asked what kind of data will be used to build the project.
The Nordic Thingy:91 prototyping board is fully supported by the Edge Impulse platform, meaning that you will be able to sample data, build the model, and deploy it back to the device straight from the platform, without the need to build any firmware. This is a great time saver because it allows users to optimize their models before having to create custom firmware for the target device.
To connect the device to Edge Impulse, download nRF connect 3.11.1 and nRF command line tools from the official sources and install them.
If you are going to be using a Linux computer for this application, make sure to run the following command as well:
Afterwards, download the official Edge Impulse Nordic Thingy:91 firmware and extract it.
Next up, make sure the board is turned off and connect it to your computer. Put the board in MCUboot mode by pressing the multi-function button placed in the middle of the device and with the button pressed, turn the board on.
Next, launch the Programmer application in nRF Connect, select your board on the left side of the window, drag and drop the firmware.hex file into the Files area, make sure Enable MCUboot is enabled, and press Write.
When prompted with the MCUboot DFU window, press Write and wait for the process to be finished.
If you struggle at any point of this process, Edge Impulse has great documentation on this subject.
Now, power cycle the board by turning it off and on again, this time without pressing the middle button, then launch a terminal and run the edge-impulse-daemon command.
You will be prompted with a message to insert your username and password and then you will be asked to select which device you would like to connect to.
You may notice that the Thingy:91 exposes multiple UARTs. Select the first one and press ENTER.
By navigating to the Devices tab in your Edge Impulse project, you will notice that your device shows up as connected.
To create this data set, we have exposed the gas sensor to various gasses that might be encountered in environments specific to Oil and Gas industries like high concentrations of CO2, CO, and Isopropanol.
To simulate increased concentrations of CO2, we have used a simple set-up that employs baking soda (Sodium Bicarbonate) and vinegar (Acetic Acid). By combining those 2 elements in a confined environment, the CO2 resulting from their reaction, being heavier than air, would spill from the container in which the reaction takes place and get picked up by the Nordic Thingy:91.
To simulate an increased concentration of carbon monoxide, we have exposed the Thingy:91 board to wood smoke. Smoke is fundamentally a complex mixture of fine particles and compounds in a gaseous state like polycyclic aromatic hydrocarbons, nitrogen oxides, sulfur oxides, and carbon monoxide.
Finally, to get a reading specific to an alcohol leakage, we have exposed the gas sensor to a bottle of 97% concentration Isopropanol. Being a very volatile compound, it quickly evaporates and is easily picked up by VOC sensors.
With the device connected, head over to the Data acquisition tab, select Environmental as the sensor that will be used for acquiring data, set the data acquisition frequency to 1Hz and start recording data.
For this application we will be defining 2 classes, labeled “Gas_Leak” and “Normal”. Neural networks do not know what to do with data they have never seen before and will try to classify it into one of the predefined classes. This is why it’s important to define a “Normal” class that contains readings specific to the usual conditions in which the system will be deployed, so as to avoid triggering false positives in the detection algorithm.
Keep in mind that machine learning heavily relies on the quantity and quality of data, so when defining a new class make sure to have at least 2.5 minutes of data for it.
With all the data gathered, it’s time to split the data into 2 categories: Training and Testing data.
Edge Impulse has automated this process, and all you have to do is press the exclamation mark near the Train/Test split section. What we are aiming for is roughly 80% Training data and 20% Testing data.
The dataset used to create this model can be downloaded here.
It's time to design the Impulse now that the dataset is available.
At this point we set up the process of taking raw data from the dataset, pre-processing it into manageable chunks called "windows", extracting the relevant features from them using digital signal processing algorithms, and then feeding those features into a classification neural network that determines whether a dangerous gas concentration is present.
An “Impulse” contains all the processes mentioned above in a manageable and easily configurable structure.
To create a gas sensing model, we will be using a 2000ms Window Size, with a Window Increase of 1000ms and a sampling frequency of 1Hz. This will be passed through a Raw Data processing block with only the “gas res” dimensions checked and then, fed into a Classification Neural Network.
Once you click “Save impulse”, you will notice that every block can be configured by clicking on its name, under the “Impulse Design” submenu.
The Raw data block may be the simplest of the processing blocks, as it has only one parameter that can be modified, namely the “Scale axes” that we will set to 10. On the top side of the screen you can see the time-domain representation of the selected sample.
When you are done configuring the block and exploring the dataset, click on Save parameters.
There are multiple parameters that can be configured in the NN classifier tab that will influence the training process of the classifier neural network. Before configuring those, it is worth understanding how this training process works. Fundamentally, a random value between 0 and 1 is initially assigned to the weight of a link between neurons. Once the training process starts, the neural network is fed the Training dataset defined in the data acquisition phase and the classification output is compared to the correct results. The algorithm then adjusts the weights assigned to the links between neurons and then compares the results once more. This process is repeated for a number of epochs, defined by the Number of training cycles parameter and the Learning rate defines how much the weights are varied each epoch.
During this training process, the neural network may become overtrained and will start to pick up measurement artifacts in the data as the defining feature of a class, instead of looking for the underlying patterns in the data. This process is called overfitting and it's the reason why the performance of a neural network must be evaluated on real world data.
When you click “Start training”, the process will be assigned to a cluster, and once the computation ends you will be presented with the performance of the Neural Network, evaluated on a percentage of samples from the Training dataset held out for validation purposes. The size of this validation pool is defined by the Validation set size parameter.
When building a machine learning model, what we aim for is a high Accuracy and a low Loss. Accuracy is the percentage of predictions for a given sample where the predicted value coincides with the actual value, and Loss is the total of all errors made for all samples in the validation set.
The confusion matrix presents the percentage of samples that were miscategorized, and the Data explorer offers a visual representation of the classified samples. It is clearly noticeable that the number of “Normal” samples wrongly classified as Gas_Leak is greater than the number of Gas_Leak data points categorized as “Normal”. For this particular application, this is not a problem, because the model will lean towards triggering a false positive and warning the user that there might be a problem, rather than passing off a real gas leak as normal environmental conditions.
The Model Testing tab allows the user to see how the neural network fares when presented with data it has not seen before. Navigate to this tab and click Classify All. Edge Impulse will then feed all the data in the Testing data pool to the neural network and present you with the classification results and the performance of the model, just like during the training process.
If your model manifests low performance when met with unseen data, there are various things you can do. First and foremost, the best thing you can do to increase the performance of the model is to give it more data. Neural Networks need a plentiful and balanced data set to be properly trained.
In our case, the dataset was large but we managed to increase the performance of the model by increasing the number of training cycles and the learning rate of the model, a sign that our neural network was not trained enough.
By increasing the number of training cycles to 100 and the Learning rate to 0.001, we observed a jump in performance on unseen data from 95.83% performance to 100% performance.
Finally, it’s time to see how the model fares when deployed on the edge. Navigate to the Deployment tab and select Nordic Thingy:91 under the Build firmware section.
After you click on Nordic Thingy:91, you will be presented with the option of enabling the EON Compiler. It’s worth comparing the resources used when the compiler is turned on versus when it's disabled. Take into consideration that it might improve on-device performance at the price of reduced accuracy. This is very helpful when deploying models on resource-constrained devices or when battery life is an issue, but it may not be worth using when the device has plentiful resources and is affixed to a wall with a power source available.
Once you have decided, click on Build.
Once the process ends, deploy the firmware built by the Edge Impulse platform in the same way you have uploaded the data-forwarder firmware during the “Connecting the device” section.
Afterwards, with the board connected to the computer and turned on, run the edge-impulse-run-impulse command in a terminal.
You will be prompted with the results of the classification that is currently running offline on the device.
When you consider the performance of the model running on the target to be satisfactory, Edge Impulse offers its users the ability to export the Impulse as a C++ library that contains the signal processing blocks, learning blocks, configurations, and the SDK needed to integrate the ML model in a custom application. By choosing this method of deployment you can build applications that trigger alarms, log data, or send notifications remotely using the connectivity layer provided by the Nordic Thingy:91 platform.
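As a rough idea of how the exported library could be integrated, here is a minimal C++ sketch; read_gas_resistance() and trigger_alarm() are placeholders for application-specific code, the 0.6 threshold is illustrative, and the "Gas_Leak" label matches the class defined in this project.

```cpp
// Minimal sketch of how the exported C++ library could be wired into a custom application.
#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Placeholder: read one gas-resistance sample from the Thingy:91 environmental sensor.
static float read_gas_resistance() { return 0.0f; }

// Placeholder: sound a buzzer, log the event, or push an LTE notification.
static void trigger_alarm(float probability) {
    printf("Possible gas leak (p = %.2f)\n", probability);
}

int main() {
    while (true) {
        // Fill one window of raw data (a 2 s window sampled at 1 Hz in this project);
        // a real application would pace these reads with a timer.
        for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i++) {
            features[i] = read_gas_resistance();
        }

        signal_t signal;
        numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

        ei_impulse_result_t result;
        if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
            continue;
        }

        // Act on the probability assigned to the "Gas_Leak" class.
        for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
            if (strcmp(result.classification[i].label, "Gas_Leak") == 0 &&
                result.classification[i].value > 0.6f) {
                trigger_alarm(result.classification[i].value);
            }
        }
    }
    return 0;
}
```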
You can find a great guide about how you can Build an application locally for a Zephyr-based Nordic Semiconductor development board in the official Edge Impulse Documentation.
In conclusion, a system built around the Nordic Thingy:91 and the Edge Impulse platform is a great way to overcome some of the challenges associated with traditional gas detection systems. By using it as an edge device, it can measure the gas concentration, run a machine learning algorithm, decide if there is a trend in the quantity of dangerous gasses in the air, and act without human intervention. Additionally, by using it as a wearable, it will be able to detect gas leakages in the proximity of the employees who wear it, ensuring that they are not exposed to dangerous levels of gas.
If you need assistance in deploying your own solutions or more information about the tutorial above please reach out to us!
Use a computer vision model running on a Sony Spresense to determine occupancy in rooms, and adapt HVAC zone output accordingly.
Created By: Jallson Suryo
Public Project Link:
GitHub Repo:
A common problem found in HVAC systems is that energy is wasted, because the system uses more energy than necessary, or the system cannot quickly adjust to the changing needs in a dynamic environment. To tackle the problem, we need a system that manages its power intensity based on what is necessary for each zone in real-time for a given environment. The power intensity necessary for each zone can be derived from the following data: number of people, time or duration spent inside, and/or the person's activity.
To overcome this challenge, a Smart HVAC System that can optimize energy consumption by adjusting the power intensity in different zones inside an office or a residential space (zones with more people, more activity, and longer time durations will need more cooling/heating and vice versa) could be created. The zone heat mapping will be generated using data obtained from a Sony Spresense microcontroller (with Edge Impulse's FOMO Machine Learning model embedded) that's mounted like a surveillance camera inside the space.
The project uses Edge Impulse's FOMO to detect multiple objects and their coordinates using a compact microcontroller with an on-board camera (the Sony Spresense). The object detection ML model will use a top view of miniature figures in standing and sitting positions as objects. The captured data will be split into Training and Test sets, and then an Impulse will be created with an Image processing block (grayscale) and an Object Detection learning block.
The accuracy result for this model on the training and test data is above 90%, giving a high degree of confidence when counting the number of objects (persons) and tracking their centroid coordinates.
The ML model is then deployed to the Spresense. The number of objects in each zone is displayed on an OLED display. The Spresense also communicates to an Arduino Nano via I2C which we are using for the fan speed controller.
This system will increase fan intensity in areas/zones that need more cooling/heating, which means more activity/people in a certain zone will increase the fan intensity in that zone. The total HVAC power output can also be adjusted based on the total number of people in the space.
The project is a proof of concept (PoC) using a 1:50 scale model of an office interior with several partitions, furniture, and miniature figures. The space is divided into 4 zones, and each zone has a small fan installed. The OLED display is used in this PoC to show the output of this simulation.
Sony Spresense, aluminium extrusion frame, and 3D printed miniature model (1:50)
System Diagram, prototyping in breadboard, and Smart HVAC System with custom design PCB
Sony Spresense
Arduino Nano
2x TB6612 Motor drivers
4x DC 5V mini fan 3cm
0.96-inch OLED display
Aluminium extrusion as a camera stand
3D printed (case, office interior 1:50 miniature)
Powerbank & battery for Sony Spresense
Edge Impulse Studio
Arduino IDE
For ease of use, in this project we will use a smartphone camera to capture the images for data collection. Take pictures from above, in different positions, with varying backgrounds, angles, and lighting conditions to ensure that the model can work under slightly different conditions (and to prevent overfitting). Lighting and object size are crucial to the performance of this model.
Note: Keep the objects similar in size across the pictures. A significant difference in object size will confuse the FOMO algorithm.
Choose the Images project option, then Classify Multiple Objects. In Dashboard > Project Info, choose Bounding Boxes for labelling method and Sony Spresense for target device. Then in Data acquisition, click on Upload Data tab, choose your photo files, auto split, then click Begin upload.
Click on the Labelling queue tab, then drag a box around an object, label it (person), and click Save. Repeat until all images are labelled.
Make sure that the ratio between Training and Test data is ideal, around 80/20.
Once you have the dataset ready, go to Create Impulse and set the image width and height to 96 x 96 (this helps keep the ML model small). Then choose Fit shortest axis, and add Image and Object Detection as the processing and learning blocks.
Go to the Image parameters section, select Grayscale as the color depth, then press Save parameters. Then click Generate features, navigate to the Object Detection section, leave the Neural Network training settings as they are (in our case the defaults work quite well), and choose FOMO (MobileNet V2 0.35). Train the model by pressing the Start training button. You can see the progress on the right side.
If everything looks OK, then we can test the model. Go to Model Testing on the left, then click Classify all. Our result is above 90%, so we can move on to the next step: deployment.
The .ino code is a modified version of the Edge Impulse example code for object detection on the Spresense. The modification adds the capability to display the person count in each room on the OLED screen and to act as the I2C controller for the Arduino Nano peripheral. The code distinguishes the four rooms using four quadrants, and by knowing the X, Y coordinates of each object's centroid we can locate the person. The Arduino Nano adjusts each fan motor using PWM based on the number of persons present in the room.
Here is a quick prototype video showing the project:
Finally, we have successfully implemented this object detection model on a Sony Spresense and used the data captured by the camera to automatically control the HVAC system's fan power intensity and display the occupancy number and power meter for each zone. I believe this proof of concept project can be implemented in a real-world HVAC system, so that the goal of optimizing room temperature and saving energy can be achieved for a better, more sustainable future.
Use a XIAO ESP32C3 to monitor temperature, humidity, and pressure to help aid in dairy manufacturing processes.
Created By: Kutluhan Aktar
Public Project Link:
As many of us know, yogurt is produced by bacterial fermentation of milk, which can be of cow, goat, ewe, sheep, etc. The fermentation process thickens the milk and provides a characteristic tangy flavor to yogurt. Considering organisms contained in yogurt stimulate the gut's friendly bacteria and suppress harmful bacteria looming in the digestive system, it is not surprising that yogurt is consumed worldwide as a healthy and nutritious food[^1].
The bacteria utilized to produce yogurt are known as yogurt cultures (or starters). Fermentation of sugars in the milk by yogurt cultures yields lactic acid, which decomposes and coagulates proteins in the milk to give yogurt its texture and characteristic tangy flavor. Also, this process improves the digestibility of proteins in the milk and enhances the nutritional value of proteins. After the fermentation of the milk, yogurt culture could help the human intestinal tract to absorb the amino acids more efficiently[^2].
Even though yogurt production and manufacturing look like a simple task, achieving precise yogurt texture (consistency) can be arduous and strenuous since various factors affect the fermentation process while processing yogurt, such as:
Temperature
Humidity
Pressure
Milk Temperature
Yogurt Culture (Starter) Amount (Weight)
In this regard, most companies employ food (chemical) additives while mass-producing yogurt to maintain its freshness, taste, texture, and appearance. Depending on the production method, yogurt additives can include dilutents, water, artificial flavorings, rehashed starch, sugar, and gelatine.
In recent years, due to the surge in food awareness and apposite health regulations, companies were coerced into changing their yogurt production methods or labeling them conspicuously on the packaging. Since people started to have a penchant for consuming more healthy and organic (natural) yogurt, it became a necessity to prepare prerequisites precisely for yogurt production without any additives. However, unfortunately, organic (natural) yogurt production besets some local dairies since following strict requirements can be expensive and demanding for small businesses trying to gain a foothold in the dairy industry.
After perusing recent research papers on yogurt production, I decided to utilize temperature, humidity, pressure, milk temperature, and culture weight measurements denoting yogurt consistency before fermentation so as to create an easy-to-use and budget-friendly device in the hope of assisting dairies in reducing total cost and improving product quality.
Even though the mentioned factors can provide insight into detecting yogurt consistency before fermentation, it is not possible to extrapolate and construe yogurt texture levels precisely by merely employing limited data without applying complex algorithms. Hence, I decided to build and train an artificial neural network model by utilizing the empirically assigned yogurt consistency classes to predict yogurt texture levels before fermentation based on temperature, humidity, pressure, milk temperature, and culture weight measurements.
Since XIAO ESP32C3 is an ultra-small size IoT development board that can easily collect data and run my neural network model after being trained to predict yogurt consistency levels, I decided to employ XIAO ESP32C3 in this project. To collect the required measurements to train my model, I used a temperature & humidity sensor (Grove), an integrated pressure sensor kit (Grove), an I2C weight sensor kit (Gravity), and a DS18B20 waterproof temperature sensor. Since the XIAO expansion board provides various prototyping options and built-in peripherals, such as an SSD1306 OLED display and a MicroSD card module, I used the expansion board to make rigid connections between XIAO ESP32C3 and the sensors.
Since the expansion board supports reading and writing information from/to files on an SD card, I stored the collected data in a CSV file on the SD card to create a data set. In this regard, I was able to save data records via XIAO ESP32C3 without requiring any additional procedures.
After completing my data set, I built my artificial neural network model (ANN) with Edge Impulse to make predictions on yogurt consistency levels (classes). Since Edge Impulse is compatible with nearly all microcontrollers and development boards, I did not encounter any issues while uploading and running my model on XIAO ESP32C3. As labels, I utilized the empirically assigned yogurt texture classes for each data record while collecting yogurt processing data:
Thinner
Optimum
Curdling (Lumpy)
After training and testing my neural network model, I deployed and uploaded the model on XIAO ESP32C3. Therefore, the device is capable of detecting precise yogurt consistency levels (classes) by running the model independently.
Since I wanted to allow the user to get updates and control the device remotely, I decided to build a complementing Blynk application for this project: The Blynk dashboard displays the recent sensor readings transferred from XIAO ESP32C3, makes XIAO ESP32C3 run the neural network model, and shows the prediction result.
Lastly, to make the device as sturdy and robust as possible while operating in a dairy, I designed a dairy-themed case with a sliding (removable) front cover (3D printable).
So, this is my project in a nutshell 😃
In the following steps, you can find more detailed information on coding, logging data on the SD card, communicating with a Blynk application, building a neural network model with Edge Impulse, and running it on XIAO ESP32C3.
Since I focused on building a budget-friendly and easy-to-use device that collects yogurt processing data and informs the user of the predicted yogurt consistency level before fermentation, I decided to design a robust and sturdy case allowing the user to access the SD card after logging data and weigh yogurt culture (starter) easily. To avoid overexposure to dust and prevent loose wire connections, I added a sliding front cover with a handle to the case. Also, I decided to emboss yogurt and milk icons on the sliding front cover so as to complement the dairy theme gloriously.
Since I needed to adjust the rubber tube length of the integrated pressure sensor, I added a hollow cylinder part to the main case to place the rubber tube. Then, I decided to fasten a small cow figure to the cylinder part because I thought it would make the case design align with the dairy theme.
I designed the main case and its sliding front cover in Autodesk Fusion 360. You can download their STL files below.
For the cow figure (replica) affixed to the top of the cylinder part of the main case, I utilized this model from Thingiverse:
Then, I sliced all 3D models (STL files) in Ultimaker Cura.
Since I wanted to create a solid structure for the main case with the sliding front cover representing dairy products, I utilized these PLA filaments:
Beige
ePLA-Matte Milky White
Finally, I printed all parts (models) with my Creality Sermoon V1 3D Printer and Creality CR-200B 3D Printer in combination with the Creality Sonic Pad. You can find more detailed information regarding the Sonic Pad in Step 1.1.
If you are a maker or hobbyist planning to print your 3D models to create more complex and detailed projects, I highly recommend the Sermoon V1. Since the Sermoon V1 is fully-enclosed, you can print high-resolution 3D models with PLA and ABS filaments. Also, it has a smart filament runout sensor and the resume printing option for power failures.
Furthermore, the Sermoon V1 provides a flexible metal magnetic suction platform on the heated bed. So, you can remove your prints without any struggle. Also, you can feed and remove filaments automatically (one-touch) due to its unique sprite extruder (hot end) design supporting dual-gear feeding. Most importantly, you can level the bed automatically due to its user-friendly and assisted bed leveling function.
Creality Sonic Pad is a beginner-friendly device to control almost any FDM 3D printer on the market with the Klipper firmware. Since the Sonic Pad uses precision-oriented algorithms, it provides remarkable results with higher printing speeds. The built-in input shaper function mitigates oscillation during high-speed printing and smooths ringing to maintain high model quality. Also, it supports G-code model preview.
As shown in the schematic below, before connecting the DS18B20 waterproof temperature sensor to the expansion board, I attached a 4.7K resistor as a pull-up from the DATA line to the VCC line of the sensor to generate accurate temperature measurements.
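For reference, reading the DS18B20 typically looks like the snippet below, using the common OneWire and DallasTemperature libraries; the data pin here is an assumption, and the project's actual code is covered step by step later on.

```cpp
#include <OneWire.h>
#include <DallasTemperature.h>

const int ONE_WIRE_PIN = D0;           // assumed data pin on the XIAO expansion board
OneWire oneWire(ONE_WIRE_PIN);         // the 4.7K pull-up sits between this DATA line and VCC
DallasTemperature milkSensor(&oneWire);

void setup() {
  Serial.begin(115200);
  milkSensor.begin();
}

void loop() {
  milkSensor.requestTemperatures();              // trigger a conversion on the 1-Wire bus
  float milk_c = milkSensor.getTempCByIndex(0);  // first (and only) DS18B20 on the bus
  Serial.println(milk_c);
  delay(1000);
}
```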
To display the collected data, I utilized the built-in SSD1306 OLED screen on the expansion board. To assign yogurt consistency levels empirically while saving data records to a CSV file on the SD card, I used the built-in MicroSD card module and button on the expansion board.
After printing all parts (models), I fastened all components except the expansion board to their corresponding slots on the main case via a hot glue gun.
I attached the expansion board to the main case by utilizing M3 screws with hex nuts and placed the rubber tube of the integrated pressure sensor in the hollow cylinder part of the main case.
Then, I placed the sliding front cover via the dents on the main case.
Finally, I affixed the small cow figure to the top of the cylinder part of the main case via the hot glue gun.
Since I focused on building an accessible device, I decided to create a complementing Blynk application for allowing the user to display recent sensor readings, run the Edge Impulse neural network model, and get informed of the prediction result remotely.
Since Blynk allows the user to adjust the unit, data range, and color scheme for each widget, I was able to create a unique web user interface for the device.
Temperature Gauge ➡ V4
Humidity Gauge ➡ V12
Pressure Gauge ➡ V6
Milk Temperature Gauge ➡ V7
Weight Gauge ➡ V8
Switch Button ➡ V9
Label ➡ V10
After completing the design of the web user interface, I tested the virtual pin connection of each widget with XIAO ESP32C3.
Since the XIAO expansion board supports reading and writing information from/to files on an SD card, I decided to log the collected yogurt processing data in a CSV file on the SD card without applying any additional procedures. Also, I employed XIAO ESP32C3 to communicate with the Blynk application to run the neural network model remotely and transmit the collected data.
To add the XIAO ESP32C3 board package to the Arduino IDE, add the following URL to the Additional Boards Manager URLs field in the IDE preferences: https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_dev_index.json
Since the provided XIAO ESP32C3 core's assigned pin numbers are not compatible with the expansion board's MicroSD card module, it throws an error on the Arduino IDE while attempting to access the SD card.
Therefore, I needed to change the assigned SS pin to 4 (GPIO4) in the pins_arduino.h file.
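The change itself is a one-liner; the surrounding definitions stay as they are, and the exact default value in your copy of pins_arduino.h may differ.

```cpp
// In pins_arduino.h of the XIAO ESP32C3 board package (only the SS definition changes):
static const uint8_t SS = 4;  // GPIO4: chip-select of the expansion board's MicroSD slot
```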
To display images (black and white) on the SSD1306 OLED screen successfully, I needed to create monochromatic bitmaps from PNG or JPG files and convert those bitmaps to data arrays.
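As a small illustration of how such an array is used (the Adafruit_SSD1306 library is assumed here; the project may use a different driver, and the 8x8 icon below is a trivial placeholder rather than one of the actual graphics):

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);   // the expansion board's 0.96" I2C OLED

// A tiny 8x8 placeholder bitmap; real icons are exported as larger arrays by the converter tool.
static const unsigned char PROGMEM icon_8x8[] = {
  0x3C, 0x42, 0xA5, 0x81, 0xA5, 0x99, 0x42, 0x3C
};

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);    // common I2C address for these modules
  display.clearDisplay();
  display.drawBitmap(0, 0, icon_8x8, 8, 8, SSD1306_WHITE);
  display.display();
}

void loop() {}
```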
After setting up XIAO ESP32C3 and installing the required libraries, I programmed XIAO ESP32C3 to collect environmental factor measurements and the culture (starter) amount in order to save them to the given CSV file on the SD card.
Temperature (°C)
Humidity (%)
Pressure (kPa)
Milk Temperature (°C)
Starter Weight (g)
Since I needed to assign yogurt consistency levels (classes) empirically as labels for each data record while collecting yogurt processing data to create a valid data set for the neural network model, I utilized the built-in button on the XIAO expansion board in two different modes (long press and short press) so as to choose among classes and save data records. After selecting a yogurt consistency level (class) by short-pressing the button, XIAO ESP32C3 appends the selected class and the recently collected data to the given CSV file on the SD card as a new row if the button is long-pressed.
Button (short-pressed) ➡ Select a class (Thinner, Optimum, Curdling)
Button (long-pressed) ➡ Save data to the SD card
You can download the AI_yogurt_processing_data_collect.ino file to try and inspect the code for collecting yogurt processing data and for saving data records to the given CSV file on the SD card.
⭐ Include the required libraries.
⭐ Initialize the File class and define the CSV file name on the SD card.
⭐ Define the 0.96 Inch SSD1306 OLED display on the XIAO expansion board.
⭐ Define the temperature & humidity sensor object (Grove), the I2C weight sensor object (Gravity), and the DS18B20 waterproof temperature sensor settings.
⭐ Define monochrome graphics.
⭐ Define the built-in button pin on the expansion board.
⭐ Then, define the button state and the duration variables to utilize the button in two different modes: long press and short press.
⭐ Initialize the SSD1306 screen.
⭐ Initialize the DS18B20 temperature sensor.
⭐ Define the required settings to initialize the temperature & humidity sensor (Grove).
⭐ In the err_msg function, display the error message on the SSD1306 OLED screen.
⭐ Check the temperature & humidity sensor connection status and print the error message on the serial monitor, if any.
⭐ Check the connection status between the I2C weight sensor and XIAO ESP32C3.
⭐ Set the calibration weight (g) and threshold (g) to calibrate the weight sensor automatically.
⭐ Display the current calibration value on the serial monitor.
⭐ Check the connection status between XIAO ESP32C3 and the SD card.
⭐ In the get_temperature_and_humidity function, obtain the measurements generated by the temperature & humidity sensor.
⭐ In the get_pressure function, get the measurements generated by the integrated pressure sensor (Grove).
⭐ Then, convert the accumulation of raw data to accurate pressure estimation.
⭐ In the get_weight function, obtain the weight measurement generated by the I2C weight sensor.
⭐ Then, subtract the container weight from the total weight to get the net weight.
⭐ In the get_milk_temperature function, obtain the temperature measurement generated by the DS18B20 temperature sensor.
⭐ In the home_screen function, display the collected data and the selected class on the SSD1306 OLED screen.
⭐ In the save_data_to_SD_Card function:
⭐ Open the given CSV file on the SD card in the APPEND file mode.
⭐ If the given CSV file is opened successfully, create a data record from the recently collected data, including the selected yogurt consistency level (class), to be inserted as a new row.
⭐ Then, append the recently created data record and close the CSV file.
⭐ After appending the given data record successfully, notify the user by displaying this message on the SSD1306 OLED screen: Data saved to the SD card!
⭐ Detect whether the built-in button is short-pressed or long-pressed.
⭐ If the button is short-pressed, change the class number [0 - 2] to choose among yogurt consistency levels (classes).
⭐ If the button is long-pressed, append the recently created data record to the given CSV file on the SD card.
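Pulling the steps above together, a heavily condensed version of that data-collection logic might look like the sketch below; the sensor helpers are stubs, and the button pin and long-press duration are assumptions, while the file name, append mode, and class order follow the article.

```cpp
#include <SPI.h>
#include <SD.h>

const char *CSV_FILE = "/yogurt_data.csv";
const int BUTTON_PIN = D1;                 // built-in button on the XIAO expansion board (assumed pin)
const unsigned long LONG_PRESS_MS = 1500;  // placeholder long-press duration

int selected_class = 0;                    // 0 = Thinner, 1 = Optimum, 2 = Curdling

// Stubs standing in for the real sensor reads described above.
float get_temperature()      { return 22.5; }
float get_humidity()         { return 55.0; }
float get_pressure()         { return 101.2; }
float get_milk_temperature() { return 41.0; }
float get_weight()           { return 3.5; }

void save_data_to_SD_Card() {
  // Open the CSV file in append mode and add one data record plus the selected class.
  // (A header row can be written once, when the file is first created.)
  File f = SD.open(CSV_FILE, FILE_APPEND);
  if (!f) return;
  f.printf("%.2f,%.2f,%.2f,%.2f,%.2f,%d\n",
           get_temperature(), get_humidity(), get_pressure(),
           get_milk_temperature(), get_weight(), selected_class);
  f.close();
}

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  SD.begin();                              // expansion board's MicroSD slot (SS patched to GPIO4)
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {              // button pressed
    unsigned long t0 = millis();
    while (digitalRead(BUTTON_PIN) == LOW) delay(10); // wait for release
    if (millis() - t0 >= LONG_PRESS_MS) {
      save_data_to_SD_Card();                        // long press: append a record
    } else {
      selected_class = (selected_class + 1) % 3;     // short press: cycle Thinner/Optimum/Curdling
    }
  }
}
```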
After uploading and running the code for collecting yogurt processing data and for saving information to the given CSV file on the SD card on XIAO ESP32C3:
🐄🥛📲 The device shows the opening screen if the sensor and MicroSD card module connections with XIAO ESP32C3 are successful.
🐄🥛📲 Then, the device displays the collected yogurt processing data and the selected class number on the SSD1306 OLED screen:
Temperature (°C)
Humidity (%)
Pressure (kPa)
Milk Temperature (°C)
Starter Weight (g)
Selected Class
🐄🥛📲 If the button (built-in) is short-pressed, the device increments the selected class number in the range of 0-2:
Thinner [0]
Optimum [1]
Curdling [2]
🐄🥛📲 If the button (built-in) is long-pressed, the device appends the recently created data record from the collected data to the yogurt_data.csv file on the SD card, including the selected yogurt consistency class number under the consistency_level data field.
🐄🥛📲 After successfully appending the data record, the device notifies the user via the SSD1306 OLED screen.
🐄🥛📲 If XIAO ESP32C3 throws an error while operating, the device shows the error message on the SSD1306 OLED screen and prints the error details on the serial monitor.
🐄🥛📲 Also, the device prints notifications and sensor measurements on the serial monitor for debugging.
To create a data set with eminent validity and veracity, I collected yogurt processing data from nearly 30 different batches. Since I focused on predicting yogurt texture precisely, I always used cow milk in my experiments but changed milk temperature, yogurt culture (starter) amount, and environmental factors while conducting my experiments.
🐄🥛📲 After completing logging the collected data in the yogurt_data.csv file on the SD card, I elicited my data set.
When I completed logging the collected data and assigning labels, I started to work on my artificial neural network model (ANN) to detect yogurt consistency (texture) levels before fermentation so as to improve product quality and reduce the total cost for small dairies.
Since Edge Impulse supports almost every microcontroller and development board due to its model deployment options, I decided to utilize Edge Impulse to build my artificial neural network model. Also, Edge Impulse makes scaling embedded ML applications easier and faster for edge devices such as XIAO ESP32C3.
Even though Edge Impulse supports CSV files to upload samples, the data type should be time series to upload all data records in a single file. Therefore, I needed to follow the steps below to format my data set so as to train my model accurately:
Data Scaling (Normalizing)
Data Preprocessing
As explained in the previous steps, I assigned yogurt consistency classes empirically while logging yogurt processing data from various batches. Then, I developed a Python application to scale (normalize) and preprocess data records to create appropriately formatted samples (single CSV files) for Edge Impulse.
Since the assigned classes are stored under the consistency_level data field in the yogurt_data.csv file, I preprocessed my data set effortlessly to create samples from data records under these labels:
0 — Thinner
1 — Optimum
2 — Curdling
Plausibly, Edge Impulse allows building predictive models optimized in size and accuracy automatically and deploying the trained model as an Arduino library. Therefore, after scaling (normalizing) and preprocessing my data set to create samples, I was able to build an accurate neural network model to predict yogurt consistency levels and run it on XIAO ESP32C3 effortlessly.
If the data type is not time series, Edge Impulse cannot distinguish data records as individual samples from one CSV file while adding existing data to an Edge Impulse project. Therefore, the user needs to create a separate CSV file for each sample, including a header defining data fields.
To scale (normalize) and preprocess my data set so as to create individual CSV files as samples automatically, I developed a Python application consisting of one file:
process_dataset.py
Since Edge Impulse can infer the uploaded sample's label from its file name, the application reads the given CSV file (data set) and generates a separate CSV file for each data record, named according to its assigned yogurt consistency class number under the consistency_level data field. Also, the application adds a sample number incremented by 1 for generated CSV files sharing the same label:
Thinner.sample_1.csv
Thinner.sample_2.csv
Optimum.sample_1.csv
Optimum.sample_2.csv
Curdling.sample_1.csv
Curdling.sample_2.csv
First of all, I created a class named process_dataset in the process_dataset.py file to bundle the following functions under a specific structure.
⭐ Include the required modules.
⭐ In the init function, read the data set from the given CSV file and define the yogurt consistency class names.
⭐ In the scale_data_elements function, scale (normalize) data elements to define appropriately formatted data items in the range of 0-1.
⭐ In the split_dataset_by_labels function:
⭐ Split data records by the assigned yogurt consistency level (class).
⭐ Add the header defining data fields as the first row.
⭐ Create scaled data records with the scaled data elements and increase the sample number for each scaled data record sharing the same label.
⭐ Then, generate CSV files (samples) from scaled data records, named with the assigned yogurt consistency level and the given sample number.
⭐ Each sample includes five data items [shape=(5,)]:
[0.2304, 0.7387, 0.34587, 0.4251, 0.421]
temperature
humidity
pressure
milk_temperature
starter_weight
⭐ Finally, create appropriately formatted samples as individual CSV files and save them in the data folder.
🐄🥛📲 After running the application, it creates samples, saves them under the data folder, and prints generated CSV file names on the shell for debugging.
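For reference, assuming the field order listed above, a generated sample file such as Optimum.sample_1.csv would contain just a header row and one scaled record, for example:

```csv
temperature,humidity,pressure,milk_temperature,starter_weight
0.2304,0.7387,0.34587,0.4251,0.421
```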
After generating training and testing samples successfully, I uploaded them to my project on Edge Impulse.
After uploading my training and testing samples successfully, I designed an impulse and trained it on yogurt consistency levels (classes).
An impulse is a custom neural network model in Edge Impulse. I created my impulse by employing the Raw Data processing block and the Classification learning block.
The Raw Data processing block generates windows from data samples without any specific signal processing.
The Classification learning block represents a Keras neural network model. Also, it lets the user change the model settings, architecture, and layers.
According to my experiments with my neural network model, I modified the neural network settings and layers to build a neural network model with high accuracy and validity:
📌 Neural network settings:
Number of training cycles ➡ 50
Learning rate ➡ 0.005
Validation set size ➡ 20
📌 Extra layers:
Dense layer (20 neurons)
Dense layer (10 neurons)
After generating features and training my model with training samples, Edge Impulse evaluated the precision score (accuracy) as 100%.
The precision score (accuracy) is approximately 100% due to the modest volume and variety of training samples collected from different batches; in other words, the model was trained and validated on a limited number of samples. Therefore, I am still collecting data to improve my training data set.
After building and training my neural network model, I tested its accuracy and validity by utilizing testing samples.
The evaluated accuracy of the model is 100%.
After validating my neural network model, I deployed it as a fully optimized and customizable Arduino library.
After building, training, and deploying my model as an Arduino library on Edge Impulse, I needed to upload the generated Arduino library to the XIAO ESP32C3 to run the model directly, so as to create an easy-to-use and capable device operating with minimal latency, memory usage, and power consumption.
Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single package while deploying models as Arduino libraries, I was able to import my model effortlessly to run inferences.
After importing my model successfully to the Arduino IDE, I programmed XIAO ESP32C3 to run inferences when the switch button on the Blynk web application is activated so as to detect yogurt consistency (texture) levels before fermentation.
Blynk Switch Button ➡ Run Inference
Also, I employed XIAO ESP32C3 to transmit the collected yogurt processing data to the Blynk application every 30 seconds and send the prediction (detection) result after running inferences successfully.
You can download the AI_yogurt_processing_run_model.ino file to try and inspect the code for running Edge Impulse neural network models and communicating with a Blynk application on XIAO ESP32C3.
You can inspect the corresponding functions and settings in Step 4.
⭐ Define the Template ID, Device Name, and Auth Token parameters provided by Blynk.Cloud.
⭐ Include the required libraries.
⭐ Define the required variables for communicating with the Blynk web application and the virtual pins connected to the dashboard widgets.
⭐ Define the required parameters to run an inference with the Edge Impulse model.
⭐ Define the features array (buffer) to classify one frame of data.
⭐ Define the threshold value (0.60) for the model outputs (predictions).
⭐ Define the yogurt consistency level (class) names:
Thinner
Optimum
Curdling
⭐ Define monochrome graphics.
⭐ Create an array including icons for each yogurt consistency level (class).
⭐ Create the Blynk object with the Wi-Fi network settings and the Auth Token parameter.
⭐ Initiate the communication between the Blynk web application (dashboard) and XIAO ESP32C3.
⭐ In the update_Blynk_parameters function, transfer the collected yogurt processing data to the Blynk web application (dashboard).
⭐ Obtain the incoming value from the switch (button) widget on the Blynk dashboard.
⭐ Then, change the model running status depending on the received value (True or False).
⭐ In the run_inference_to_make_predictions function:
⭐ Scale (normalize) the collected data depending on the given model and copy the scaled data items to the features array (buffer).
⭐ If required, multiply the scaled data items while copying them to the features array (buffer).
⭐ Display the progress of copying data to the features buffer on the serial monitor.
⭐ If the features buffer is full, create a signal object from the features buffer (frame).
⭐ Then, run the classifier.
⭐ Print the inference timings on the serial monitor.
⭐ Obtain the prediction (detection) result for each given label and print them on the serial monitor.
⭐ A detection result greater than the given threshold (0.60) represents the most accurate label (yogurt consistency level) predicted by the model.
⭐ Print the detected anomalies on the serial monitor, if any.
⭐ Finally, clear the features buffer (frame).
⭐ If the switch (button) widget on the Blynk dashboard is activated, start running an inference with the Edge Impulse model to predict the yogurt consistency level.
⭐ Then, change the model running status to False.
⭐ If the Edge Impulse model predicts a yogurt consistency level (class) successfully:
⭐ Display the prediction (detection) result (class) on the SSD1306 OLED screen with its assigned monochrome icon.
⭐ Transfer the predicted label (class) to the Blynk web application (dashboard) to inform the user.
⭐ Clear the predicted label.
⭐ Every 30 seconds, transmit the collected environmental factors and culture amount to the Blynk web application so as to update the assigned widgets for each data element on the Blynk dashboard.
My Edge Impulse neural network model predicts possibilities of labels (yogurt consistency classes) for the given features buffer as an array of 3 numbers. They represent the model's "confidence" that the given features buffer corresponds to each of the three different yogurt consistency levels (classes) [0 - 2], as shown in Step 5:
0 — Thinner
1 — Optimum
2 — Curdling
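Condensing the walkthrough above, the core of the inference step looks roughly like the sketch below. It relies on the standard Edge Impulse Arduino library calls; the full version, including the scaling code plus the Blynk and OLED handling, is in AI_yogurt_processing_run_model.ino.

```cpp
#include <IoT_AI-driven_Yogurt_Processing_inferencing.h>
#include <string.h>

// Scaled sensor readings (temperature, humidity, pressure, milk temperature,
// starter weight), filled elsewhere in the sketch as described above.
float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the classifier uses to read slices of the features buffer.
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

// Returns the index of the predicted class (0 - 2), or -1 if no score passes the threshold.
int run_inference() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &raw_feature_get_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return -1;

  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    // 0.60 is the threshold defined earlier in the walkthrough.
    if (result.classification[ix].value >= 0.60) return (int)ix;
  }
  return -1;
}
```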
After executing the AI_yogurt_processing_run_model.ino file on XIAO ESP32C3:
🐄🥛📲 The device shows the opening screen if the sensor and MicroSD card module connections with XIAO ESP32C3 are successful.
🐄🥛📲 Then, the device displays the collected environmental factor measurements and the culture (starter) amount on the SSD1306 OLED screen:
Temperature (°C)
Humidity (%)
Pressure (kPa)
Milk Temperature (°C)
Starter Weight (g)
🐄🥛📲 Also, every 30 seconds, the device transmits the collected yogurt processing data to the Blynk web application so as to update the assigned widgets for each data element on the Blynk dashboard.
🐄🥛📲 If the switch (button) widget is activated on the Blynk dashboard, the device runs an inference with the Edge Impulse model and displays the detection result, which represents the most accurate label (yogurt consistency class) predicted by the model.
🐄🥛📲 Each yogurt consistency level (class) has a unique monochrome icon to be shown on the SSD1306 OLED screen when being predicted (detected) by the model:
Thinner
Optimum
Curdling (Lumpy)
🐄🥛📲 After running the inference successfully, the device also transfers the predicted label (class) to the Blynk web application (dashboard) to inform the user.
🐄🥛📲 Also, the device prints notifications and sensor measurements on the serial monitor for debugging.
As far as my experiments go, the device detects yogurt consistency (texture) levels precisely before fermentation :)
After the fermentation process, I had yogurt batches with the exact consistency (texture) levels predicted by the Edge Impulse neural network model.
By applying neural network models trained on temperature, humidity, pressure, milk temperature, and culture weight measurements to detect yogurt consistency (texture) levels, we can:
🐄🥛📲 improve product quality without food additives,
🐄🥛📲 reduce the total cost for local dairies,
🐄🥛📲 incentivize small businesses to produce organic (natural) yogurt.
Detect harmful gases with machine learning and an Arduino Nano 33 BLE plus gas sensor.
Created By: Roni Bandini
Public Project Link:
Industries working with chemicals are always subject to leaks that could harm workers. A Machine Learning model could be trained to identify subtle relationships between multiple gas readings to spot custom leaks.
Arduino Nano 33 BLE Sense
MiCS-4514 gas sensor
5V cooler
OLED screen 128x32
Buzzer
3D printed parts
DC female connector
5V power supply
This project uses the MiCS-4514 multi-gas sensor, which is able to detect the following gases:
Methane (CH4): 1000–25000 ppm
Ethanol (C2H5OH): 10–500 ppm
Hydrogen (H2): 1–1000 ppm
Ammonia (NH3): 1–500 ppm
Carbon Monoxide (CO): 1–1000 ppm
Nitrogen Dioxide (NO2): 0.1–10 ppm
The device will read the gas sensor a set number of times over a set period and then calculate the minimum, maximum, and average of those readings. These values are forwarded to the machine learning model for inference, and a score is obtained for the "harmful" and "regular" classes.
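As a rough illustration of that windowing logic, the sketch below computes the per-window statistics; the readGasPPM() helper is hypothetical and stands in for the actual MiCS-4514 library call for one gas, while measuresNumber and measuresTimeFrame match the settings mentioned further below.

```cpp
// Hypothetical helper standing in for the MiCS-4514 library call for a single gas.
float readGasPPM();

const int measuresNumber    = 4;     // readings per window
const int measuresTimeFrame = 1500;  // window length in milliseconds

// Collects one window of readings and computes min, max and average,
// which are then copied into the model's feature buffer.
void collectWindow(float &minVal, float &maxVal, float &avgVal) {
  minVal = 1e9f;
  maxVal = -1e9f;
  avgVal = 0.0f;
  for (int i = 0; i < measuresNumber; i++) {
    float v = readGasPPM();
    if (v < minVal) minVal = v;
    if (v > maxVal) maxVal = v;
    avgVal += v;
    delay(measuresTimeFrame / measuresNumber);  // spread the readings over the window
  }
  avgVal /= measuresNumber;
}
```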
Upload the acquisition script to the Arduino Nano 33 BLE Sense. Place the sensor unit close to the gas or substance and open the serial monitor. You should see the multi-gas sensor values printed as CSV records. Uncheck the timestamp option, then copy and paste the serial monitor output into a text file. Save that file as harmful.csv and add this header:
timestamp, CO2avg, C2H5OHavg, H2avg, NH3avg, CO2min, C2H5OHmin, H2min, NH3min, CO2max, C2H5OHmax, H2max, NH3max
Repeat the procedure for all the gases to be detected.
Go to Impulse Design, Create Impulse.
In Time Series Data, use a 1500 ms Window Size and a frequency of 0.6 Hz.
The Processing block will be Raw Data with all axes checked. For classification, use Keras with 2 output features: regular and harmful.
In Raw Data you can see all values for regular and harmful inside each window. Then click Generate Features.
For the NN Classifier, use 60 training cycles, a 0.0005 learning rate, a validation set size of 20, and auto-balance the dataset. Add an extra Dropout layer with a rate of 0.1, click Start Training, and check whether you get good accuracy.
If you are OK with the results, you can go to Model Testing and check the performance with new data. If many readings are classified incorrectly, you should review the data acquisition procedure.
Go to Deployment. Select Arduino Library and save the zip file. Now go to Arduino IDE, Sketch, Include Library, Add Zip library and select the downloaded zip file.
Connect the Arduino to the computer with the USB cable and upload the code. Note that the upload can take several minutes.
Several parameters in the sketch can be adjusted:
To change the calibration time in minutes: #define CALIBRATION_TIME 3
If your OLED screen has a different I2C address: #define SCREEN_ADDRESS 0x3C
To use another pin for the buzzer: #define pinBuzzer 2
To use more readings for the min, max and average: int measuresNumber=4;
To change the measurement timeframe: int measuresTimeFrame=1500; // 1.5 seconds
To change the score threshold used to identify the gases: float scoreLimit=0.8;
After displaying the Bhopal and Edge Impulse logos, the unit will start the calibration phase. During this phase, do not place any substance or gas under the sensor unit; it should read normal air conditions. As soon as this step is finished, you can put the leaked substance under the sensor unit and it should be detected within 1.5 seconds. Why 1.5 seconds? During that period, 4 readings are made to obtain the min, max and average for all gases. That information is then forwarded to the model and a classification is returned.
The prototype is able to detect normal air conditions, a regular gas and a harmful gas.
Estimate the CO2 level in an indoor environment by counting the people in the room using TinyML.
Created By: Swapnil Verma
Public Project Link:
It has been almost two and a half years since the COVID-19 pandemic started. After multiple vaccines and numerous tests, we are slowly going back to our old lives; for me, that means going back to the office, seeing people, and organising face-to-face meetings (along with video calls, of course). Even though we are going back to our old lives, COVID is far from over, and to prevent and monitor infection, we have specific arrangements in place. One such arrangement is CO2 monitors in indoor environments. One study suggests that we can predict the infection risk by observing the CO2 level in an indoor environment [1]. A higher level of CO2 means poor ventilation and/or higher occupancy, and thus a higher infection risk.
Can we predict a higher infection risk using any other technique? Let us explore our options.
My solution uses a TinyML based algorithm to detect and count the people in an indoor environment. The algorithm will be deployed on a microcontroller. The microcontroller will capture an image or stream of images using a camera and then perform inference on the device to count people.
The device can record the occupancy level locally or send it to a remote machine, possibly a server, for further evaluation. After counting the number of people in an indoor environment, we can do all sorts of things. For example, we can calculate the approximate CO2 level in the room, or the distance between people to predict the infection risk [2], etc. In this project, I will focus on CO2 level estimation.
The hardware I am proposing for this project is pretty simple. It consists of
An Arduino Portenta H7
An Arduino Portenta Vision Shield
In this project, the dataset I am using is a subset of the PIROPO database [3].
PIROPO Database - https://sites.google.com/site/piropodatabase/
The dataset contains multiple sequences recorded in the two indoor rooms using a perspective camera.
The original PIROPO database contains perspective as well as omnidirectional camera images.
I used Edge Impulse's automatic labelling feature to label people in the PIROPO images. I then divided the data into training and test sets using the train/test split feature. During training, Edge Impulse automatically divides the training dataset into training and validation datasets.
The training F1 score of my model is 91.6%, and the testing accuracy is 86.42%. For live testing, I deployed the model by building openMV firmware and flashed that firmware using the OpenMV IDE. A video of live testing performed on Arduino Portenta H7 is attached in the Demo section below.
This section contains a step-by-step guide to downloading and running the software on the Arduino Portenta H7.
Open the ei_object_detection.py and run it in the OpenMV IDE.
This system is quite simple. The Vision shield (or any camera) captures a 240x240 image of the environment and passes it to the FOMO model prepared using Edge Impulse. This model then identifies the people in the image and passes the number of people to the CO2 level estimation function every minute. The function then estimates the amount of CO2 using the below formula.
The average human exhales about 2.3 pounds of carbon dioxide per day [4]; the magic number 0.02556 comes from dividing 2.3 by 24x60 (the minutes in a day) and converting the result to ounces, so the equation yields the amount of CO2 in ounces per minute. The person detection model can also be reused for other applications, such as occupancy detection. The system then repeats this process.
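The project's script actually runs in the OpenMV environment, but purely to illustrate the arithmetic, the estimate boils down to the snippet below (the constant and function name are just for illustration):

```cpp
// ~2.3 lb of CO2 per person per day / 1440 minutes per day * 16 oz per lb ≈ 0.02556 oz/min
const float CO2_OZ_PER_PERSON_PER_MINUTE = 0.02556f;

// Estimated CO2 (in ounces) added to the room by peopleCount people over a number of minutes.
float estimateCO2Ounces(int peopleCount, float minutes) {
  return peopleCount * CO2_OZ_PER_PERSON_PER_MINUTE * minutes;
}
```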
The testing accuracy of this model is 86.4% when tested with the PIROPO dataset. But that is not the final test of this model. It should perform well when introduced to a new environment, and that is exactly what I did. I used this model on the Arduino Portenta H7 with the vision shield to detect myself in my living room. The model has never seen me nor my living room before. Let's see how well it performs.
The calculation of CO2 level is a straightforward task compared to person detection, therefore, in these demos, I have focused only on person detection.
Note: These videos are 240x240 in resolution and are recorded using the OpenMV IDE. The FPS (Frame Per Second) improves when the device is not connected to the OpenMV IDE.
In the above video, the model is doing quite well. It is missing me in some frames but it has got a lock on me most of the time. I was surprised to see it work this well even when I was behind the couch.
In the above test, I am testing how well it detects someone standing far from the camera. To my surprise, it is detecting me well even when I am standing farthest I could be in this room. It is also detecting me when I am a little sideways while walking.
In the above test, I wanted to see the system's performance when I am sitting on a chair. It works excellently when it sees me from the side, but it does not detect me when I am facing the camera. I think this is because, in my training data, all samples where a person is sitting on a chair capture their side profile and not their front profile. It can be improved by using datasets which contain a person sitting on a chair and facing toward the camera.
Looking at the live testing performance of this model, it is clear that the model is working quite well but has some room for improvement. Considering that the inference is performed on a Microcontroller with 240x240 image data, I am happy with the results. As a next step, I will try to improve its capability as well as accuracy by using diverse training data.
The CO2 level estimation is a simple task given the person detection model has good accuracy and repeatability. The next step for this application would be to improve the estimation by also considering the flow of CO2 out of an indoor space.
Detecting the presence of methane using a SiLabs xG24 Dev Kit and a gas sensor.
Created By:
Public Project Link:
GitHub Repository:
Methane is a colorless, odorless gas that is the main component of natural gas. It is also a common by-product of coal mining. When methane is present in high concentrations, it can be explosive. For this reason, methane monitoring is essential for the safety of workers in mines and other workplaces where methane may be present.
There are many different ways to monitor methane levels. Some methods, such as fixed gas monitors, are designed to provide constant readings from a set location. Others, such as personal portable gas monitors, are designed to be carried by individual workers so that they can take immediate action if methane levels rise to dangerous levels.
Most countries have regulations in place that require the monitoring of methane levels in mines and other workplaces. These regulations vary from country to country, but they all have the same goal: to keep workers safe from the dangers of methane gas exposure.
There are many different methane monitoring systems on the market, but choosing the right one for your workplace can be a challenge. There are a few things you should keep in mind when choosing a methane monitor:
The type of work environment: Methane monitors come in a variety of shapes and sizes, each designed for a specific type of work environment. You will need to choose a methane monitor that is designed for use in the type of workplace where it will be used. For example, personal portable gas monitors are designed to be worn by individual workers, while fixed gas monitors are designed to be placed in a specific location.
The size of the workplace: The size of the workplace will determine how many methane monitors you will need. For example, a small mine might only require a few fixed gas monitors, while a large mine might require dozens.
The methane concentration: The level of methane present in the workplace will determine how often the methane monitor needs to be used. For example, in a workplace with a high concentration of methane, the monitor may need to be used more frequently than in a workplace with a low concentration of methane.
Choosing the right methane monitor for your workplace can be a challenge, but it is an important part of keeping your workers safe from the dangers of methane gas.
The Silabs EFR32xG24 Dev Kit is the perfect solution for methane monitoring in mines and other workplaces. It features a Machine Learning (ML) hardware accelerator that can be used to develop custom gas detection algorithms. The Silabs EFR32xG24 comes equipped with a pressure sensor, ambient light sensor, and hall-effect sensor, all of which can be used to further customize the monitoring system.
Micro USB cable
3D printed enclosure
Prototyping wires
Edge Impulse account
Edge Impulse CLI
To use the MQ-4 sensor with the Silabs EFR32xG24 Dev Kit, you will need to connect the sensor to the board as follows:
Connect the VCC pin on the sensor to the 3.3V pin on the board.
Connect the GND pin on the sensor to the GND pin on the board.
Connect the A0 pin on the sensor to the A0 pin on the board.
With the hardware set up, you are ready to begin developing your methane monitoring system.
In the next screen, you should see your device name:
Afterwards, select Auto in the Package Installation Options menu and click Next.
This code sample reads analog values from the Methane sensor connected to Pin 16 of the dev board, which corresponds to PC05 (Pin 5, Port C). If you want to change the pin, you’ll have to update the following lines of code:
Next up, right click on the project name in the Project Explorer menu, click on Run as and select Silicon Labs ARM program. The Device Selection menu will pop up and you’ll have to select your board.
In case this warning pops up click Yes:
To check out the values printed by the dev board, you can use the Arduino IDE serial monitor or Picocom.
To get started, you will need to create an Edge Impulse project. Edge Impulse is a Machine Learning platform that makes it easy to develop custom algorithms for a variety of applications, including methane monitoring.
To create an Edge Impulse project, simply log in or sign up for an account at https://www.edgeimpulse.com/. Once you have an account, click the "Create new project" button on the dashboard.
You will be asked to give your project a name and select a category. For this project, we will be using the "Custom classification" template. Give your project a name and description, then click the "Create project" button.
With your Edge Impulse project created, you are ready to begin developing your methane detection algorithm.
To populate the data pool, connect the board to the computer using a micro-USB cable, launch a terminal and run:
After entering your account information (username and password), you must first choose a project for the device to be assigned to.
You will then be prompted to name the sensor axis that the edge-impulse-data-forwarder picked up.
If everything went well, the development board will appear on your project's Devices tab with a green dot next to it, signifying that it is online and prepared for data collection.
The first step in developing a Machine Learning algorithm is to collect training data. This data will be used to train the algorithm so that it can learn to recognize the patterns that indicate the presence of methane gas.
To collect training data, you will need to use the methane sensor to take readings in a variety of conditions, both with and without methane gas present. For each reading, you will need to take note of the concentration of methane present, as well as the ambient temperature and humidity.
It is important to collect a variety of data points, as this will give the algorithm a better chance of learning to recognize the patterns that indicate the presence of methane gas. Try to take readings in different places, at different times of day, and in different weather conditions. If possible, it is also helpful to take readings with different people so that the algorithm can learn to recognize the patterns that are specific to each situation.
Now, you will need to configure the sensor settings. For this project, we will be using the following settings:
Sensor: MQ-4
Board sampling rate: 1924 Hz
Data recording length: 10 seconds
With these settings configured, click the "Start sampling" button to begin collecting data. The data will be automatically stored in the Edge Impulse cloud and can be used to train your Machine Learning algorithm.
Machine learning algorithms need to be trained on data that is representative of the real-world data they will encounter when deployed on the edge. For this reason, it is important to split the data into training and testing sets: the first is used during the training process of the neural network, while the second is used to evaluate its performance.
With the training data collected, you are ready to begin designing your methane detection algorithm. In Edge Impulse, algorithms are designed using a drag-and-drop interface called the "Impulse designer".
To access the impulse designer, click the "Design impulse" button on your project's dashboard. You will be presented with a list of available blocks that can be used to build your algorithm. For this project, we will be using the following blocks:
After you click “Save impulse,” you will notice that each block may be configured by clicking on its name under the “Impulse Design” submenu. The Spectral Analysis block is one of the simplest processing blocks since it only has a few adjustable parameters. On the upper part of the screen, you can see a time-domain representation of the sample that was selected.
So, how does a neural network know what predictions to make? The answer lies in its many layers, where each layer is connected to another through neurons. At the beginning of the training process, connection weights between neurons are randomly determined.
A neural network is designed to predict a set of results from a given set of data, which we call training data. This works by first presenting the network with the training data, and then checking its output against the correct answer. Based on how accurate the prediction was, the connection weights between neurons are adjusted. We repeat this process multiple times until predictions for new inputs become more and more accurate.
When configuring this block, there are multiple parameters that can be modified:
The number of training cycles is the total number of epochs the network is trained for. Each complete pass the training algorithm makes through all of the learning data with back-propagation, modifying the model's parameters as it goes, is known as an epoch or training cycle (Figure 1).
The learning rate controls how much the model's internal parameters are updated during each step of the training process, or in other words, how quickly the neural network will learn. If the network overfits too rapidly, you can lower the learning rate.
Auto-balance dataset mixes in more copies of data from classes that are uncommon. This function might help make the model more robust against overfitting if you have little data for some classes.
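For intuition, the weight adjustment described above is essentially a gradient-descent step (a textbook formulation, not anything specific to Edge Impulse's implementation):

$$ w \leftarrow w - \eta \, \frac{\partial L}{\partial w} $$

where w is a connection weight, L is the training loss, and η is the learning rate discussed above; a smaller η means smaller, more cautious updates on each pass through the data.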
Even though methane leaks in mining operations are a well-known problem with a variety of market-available solutions, conventional threshold-based detection systems have some drawbacks, including the need for remote data processing and the challenges of ensuring connectivity in mining-specific environments.
Edge AI technologies allow users to circumvent those weak points by embedding the whole signal acquisition, data processing and decision making on a single board that can run independently from any wireless network and which can raise an alarm if dangerous trends in the methane levels in the facility are detected.
Demonstrating Sensor Fusion on an Arduino Nano 33 BLE Sense to detect the presence of a fire using TinyML.
Created By: Nekhil R.
Public Project Link:
In order to properly identify a fire situation, a fire detection system needs to be accurate and fast. However, many commercial fire detection systems use simple sensors, so their fire recognition accuracy can be reduced due to the limitations of the sensor's detection capabilities. Existing devices that use rule-based algorithms or image-based machine learning might be unable to adapt to changes in the environment because of their static features.
In this project, we will develop a device that can detect fire by means of sensor fusion and machine learning. The combination of sensors will help to make more accurate predictions about the presence of fire, versus single-sensor monitoring. We will collect data from sensors such as temperature, humidity, and pressure in various fire situations and extract features to build a machine-learning model to detect fire events.
To make this project a reality, we are using an Arduino Nano 33 BLE Sense with Edge Impulse. For data collection, there are two ways to get data samples into the Edge Impulse platform: either through the Edge Impulse CLI, or through a web browser logged into the Edge Impulse Studio.
Collecting data through the web browser is simple and straightforward. To do this, connect the device to your computer and open the Edge Impulse Studio. Press the Connect Using WebUSB button, and select your development board. The limitation of using the web serial integration is that it only works with development boards that have full Edge Impulse support.
The data collection settings for our project are shown below. Using temperature, humidity and pressure as environmental sensors is a good choice, as they are the parameters that change the most in case of fire events. Also, the sampling rate of 12.5 Hz is appropriate as these parameters are slow-moving.
We have only two classes in this project: No Fire and Fire. For the No Fire case, we collected data at different points in the room. For capturing the Fire data, we built a fire using a camp-like setup in my backyard. To make our model robust, we collected data at different points in the area.
13 minutes of data are collected for two labels and split between Training and Testing datasets. Once the data is uploaded, Edge Impulse has a tool called Data Explorer which gives you a graphical overview of your complete dataset.
This tool is very useful for quickly looking for outliers and discrepancies in your labels and data points.
This is our machine learning pipeline, known as an Impulse:
These are our Spectral Analysis parameters for the Filter and Spectral Power settings. We didn't apply any filter to the raw data.
The image below shows the generated features for the collected data, and we can see that the data is well separated and distinguishable. Notice that in the case of the Fire event there are actually three clusters of data, which shows that the parameters change at different points.
After successfully extracting the features from the DSP block, it's time to train the machine learning model.
Here are our Neural Network settings and architecture, which work very well for our data.
After training, we achieved 98% validation accuracy for the data, so the model seems to be working well.
The Confusion matrix is a great tool for evaluating the model; as you can see below, 2.1% of the data samples are misclassified as No Fire.
By checking the Feature explorer we can easily identify the samples which are misclassified. It also shows the time at which the incorrect classification happened. Here is one example.
This machine learning model seems to be working well enough for our project, so let's see how our model performs on unseen data.
When collecting data, some data is set aside and not used in the Training process. This is called Test data, and we can use it now to check how our model performs on unseen data.
The Confusion matrix and Feature explorer show that our model performs very well.
Now let's test the model with some real-world data. For that we need to move onto the Live Classification tab and connect our Arduino using WebUSB once again.
The above sample was recorded when there was no fire present, and the below sample was recorded when there was a fire.
It looks like real-world data of No Fire and Fire events are well classified, so our model is ready for deployment onto the Arduino.
The complete hardware unit consists of the Arduino Nano 33 BLE Sense, power adapter, and an ESP-01. The ESP-01 is used to add WiFi connectivity to the Arduino. This component handles sending email alerts over a designated WiFi connection. This occurs via serial communication between the Arduino and ESP-01. In order to establish this communication, we first need to upload the necessary code to both the ESP-01 and the Arduino, which can be found in the GitHub repository linked below. Afterwards, we connected the components according to this schematic:
This is the final hardware setup for the project:
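The exact message format lives in the linked GitHub code, but the hand-off between the two boards can be pictured roughly like this; the baud rate, threshold, and one-character token below are assumptions for illustration only.

```cpp
void setup() {
  Serial1.begin(9600);          // hardware UART wired to the ESP-01 (assumed baud rate)
}

// Called after each classification with the confidence of the Fire class.
void notifyIfFire(float fireConfidence) {
  if (fireConfidence > 0.8f) {  // assumed alert threshold
    Serial1.print('F');         // hypothetical "fire detected" token; the ESP-01 then sends the email
  }
}
```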
Open , login (or create an account first), then create a new Project.
You should have the Arduino IDE installed on your computer for the following step. On the navigation menu, choose Deployment on the left, search for and select Arduino Library in the Selected Deployment box, and then click Build. Once the Edge Impulse Arduino library is built, downloaded, and unzipped, download the spresense_camera_smartHVAC_oled.ino code and place it inside the unzipped folder from Edge Impulse. Once the .ino code is inside the unzipped Edge Impulse folder, move it to your Arduino folder on your computer. Now you can upload the .ino code to your Spresense board via the Arduino IDE.
The code for the Arduino Nano peripheral Nano_SmartHVAC_I2C_Peripheral.ino can be .
Additional 3D-printed case design for Sony Spresense and the office configuration for this project can be found in the GitHub repo at :
🎁🎨 Huge thanks to for sponsoring these products:
⭐ XIAO ESP32C3 |
⭐ XIAO Expansion Board |
⭐ Grove - Temperature & Humidity Sensor |
⭐ Grove - Integrated Pressure Sensor Kit |
🎁🎨 Huge thanks to for sponsoring a .
🎁🎨 Also, huge thanks to for sending me a , a , and a .
Before the first use, remove unnecessary cable ties and apply grease to the rails.
Test the nozzle and hot bed temperatures.
Go to Print Setup ➡ Auto leveling and adjust five predefined points automatically with the assisted leveling function.
Finally, place the filament into the integrated spool holder and feed the extruder with the filament.
Since the Sermoon V1 is not officially supported by Cura, download the latest version and copy the official printer settings provided by Creality, including Start G-code and End G-code, to a custom printer profile on Cura.
Since I wanted to improve my print quality and speed with Klipper, I decided to upgrade my Creality CR-200B 3D Printer with the .
Although the Sonic Pad is pre-configured for some Creality printers, it does not support the CR-200B officially yet. Therefore, I needed to add the CR-200B as a user-defined printer to the Sonic Pad. Since the Sonic Pad needs unsupported printers to be flashed with the self-compiled Klipper firmware before connection, I flashed my CR-200B with the required Klipper firmware settings via FluiddPI by following .
If you do not know how to write a printer configuration file for Klipper, you can download the stock CR-200B configuration file from .
After flashing the CR-200B with the Klipper firmware, copy the configuration file (printer.cfg) to a USB drive and connect the drive to the Sonic Pad.
After setting up the Sonic Pad, select Other models. Then, load the printer.cfg file.
After connecting the Sonic Pad to the CR-200B successfully via a USB cable, the Sonic Pad starts the self-testing procedure, which allows the user to test printer functions and level the bed.
After completing the printer setup, the Sonic Pad lets the user control all functions provided by the Klipper firmware.
In Cura, export the sliced model in the ufp format. After uploading .ufp files to the Sonic Pad via the USB drive, it converts them to sliced G-code files automatically.
Also, the Sonic Pad can display model preview pictures generated by Cura with the Create Thumbnail script.
First of all, I attached XIAO ESP32C3 to . Then, I connected and to the expansion board via Grove connection cables.
Since does not include a compatible connection cable for a Grove port, I connected the weight sensor to the expansion board via a 4-pin male jumper to Grove 4-pin conversion cable.
provides a free cloud service to communicate with supported microcontrollers and development boards, such as ESP32C3. Also, Blynk lets the user design unique web and mobile applications with drag-and-drop editors.
First of all, create an account on and open Blynk.Console.
Before designing the web application on Blynk.Console, install on the Arduino IDE to send and receive data packets via the Blynk cloud: Go to Sketch ➡ Include Library ➡ Manage Libraries… and search for Blynk.
Then, create a new device from the Quickstart Template, named XIAO ESP32C3, and select ESP32 as the board type.
After creating the device successfully, copy the Template ID, Device Name, and Auth Token variables required by the Blynk library.
Open the Web Dashboard and click the Edit button to change the web application design.
From the Widget Box, add the required widgets and assign each widget to a virtual pin as the datastream option.
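On the firmware side, the widget-to-virtual-pin binding described above boils down to a few Blynk library calls. Here is a minimal sketch; the virtual pin numbers and credentials are placeholders, not the project's actual assignments.

```cpp
#define BLYNK_TEMPLATE_ID  "YOUR_TEMPLATE_ID"
#define BLYNK_DEVICE_NAME  "XIAO ESP32C3"
#define BLYNK_AUTH_TOKEN   "YOUR_AUTH_TOKEN"

#include <WiFi.h>
#include <BlynkSimpleEsp32.h>

char ssid[] = "YOUR_SSID";
char pass[] = "YOUR_PASSWORD";

bool runModel = false;   // toggled by the switch widget

// Called whenever the switch widget bound to virtual pin V0 changes (pin number assumed).
BLYNK_WRITE(V0) {
  runModel = param.asInt();
}

void setup() {
  Blynk.begin(BLYNK_AUTH_TOKEN, ssid, pass);
}

void loop() {
  Blynk.run();
  // Example: push a sensor reading to a gauge widget bound to virtual pin V1 (assumed).
  // Blynk.virtualWrite(V1, temperature);
}
```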
However, before proceeding with the following steps, I needed to set up on the Arduino IDE and install the required libraries for this project.
To add the XIAO ESP32C3 board package to the Arduino IDE, navigate to File ➡ Preferences and paste the URL below under Additional Boards Manager URLs.
Then, to install the required core, navigate to Tools ➡ Board ➡ Boards Manager and search for esp32.
After installing the core, navigate to Tools ➡ Board ➡ ESP32 Arduino and select XIAO_ESP32C3.
The pins_arduino.h file location: \esp32\hardware\esp32\2.0.5\variants\XIAO_ESP32C3.
Finally, download the required libraries for the temperature & humidity sensor, the I2C weight sensor, the DS18B20 temperature sensor, and the SSD1306 OLED display:
Sensirion arduino-core |
arduino-i2c-sht4x |
DFRobot_HX711_I2C |
OneWire |
DallasTemperature |
Adafruit_SSD1306 |
Adafruit-GFX-Library |
First of all, download the .
Then, upload a monochromatic bitmap and select Vertical or Horizontal depending on the screen type.
Convert the image (bitmap) and save the output (data array).
Finally, add the data array to the code and print it on the screen.
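Putting those steps together with the Adafruit_SSD1306 and Adafruit_GFX libraries listed above, the drawing code looks roughly like this; the icon array name, its 32x32 size, and the 128x64 screen dimensions are placeholders for the converter's actual output and your display.

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);   // width, height, I2C bus, no reset pin

// Paste the data array generated by the converter here (32x32 pixels = 128 bytes).
static const unsigned char PROGMEM icon_optimum[128] = { 0 };

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);    // 0x3C is the typical SSD1306 I2C address
  display.clearDisplay();
  display.drawBitmap(0, 0, icon_optimum, 32, 32, SSD1306_WHITE);  // assumed 32x32 icon
  display.display();                            // push the buffer to the screen
}

void loop() {}
```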
You can inspect as a public project.
First of all, sign up for and create a new project.
Navigate to the Data acquisition page and click the Upload existing data button.
Then, choose the data category (training or testing) and select Infer from filename under Label to deduce labels from CSV file names automatically.
Finally, select CSV files and click the Begin upload button.
Go to the Create impulse page. Then, select the Raw Data processing block and the Classification learning block. Finally, click Save Impulse.
Before generating features for the neural network model, go to the Raw data page and click Save parameters.
After saving parameters, click Generate features to apply the Raw data processing block to training samples.
Finally, navigate to the NN Classifier page and click Start training.
To validate the trained model, go to the Model testing page and click Classify all.
To deploy the validated model as an Arduino library, navigate to the Deployment page and select Arduino library.
Then, choose the Quantized (int8) optimization option to get the best performance possible while running the deployed model.
Finally, click Build to download the model as an Arduino library.
After downloading the model as an Arduino library in the ZIP file format, go to Sketch ➡ Include Library ➡ Add .ZIP Library...
Then, include the IoT_AI-driven_Yogurt_Processing_inferencing.h file to import the Edge Impulse neural network model.
Source code:
3d parts:
This project is powered by a TinyML algorithm prepared using ; therefore, it is not limited to just one type of hardware. We can deploy it on all the as well as on your smartphone!
I imported the subset of the PIROPO database to the Edge Impulse via the tab. This tab has a cool feature called , which uses YOLO to label an object in the image automatically for you.
Training and testing are done using above mentioned PIROPO dataset. I used the architecture by the Edge Impulse to train this model. To prepare a model using FOMO, please follow this .
Clone or download repository.
Follow the Edge Impulse guide to flash the firmware (edge_impulse_firmware_arduino_portenta.bin) to the Arduino Portenta H7 using the OpenMV IDE.
[1]
[2]
[3] PIROPO Database -
[4]
To shield the electronic system from the harsh environment specific to the mining industry, we have designed and 3D printed a case that exposes only the transducer of the methane sensor and offers the possibility of mounting the assembly on a hard surface. The files for the enclosure can be .
The EFR32xG24 Dev Kit has an on-board USB J-Link Debugger so you’ll need to install the as well as the to be able to program it. Connect the board to your computer using the micro-USB cable and run the Simplicity Studio installer. In the Installation Manager, choose Install by connecting device(s).
Now that you have the setup for the build environment, download the data collection project and open it with Simplicity Studio from File -> Import project.
To ensure your Arduino Nano 33 is properly connected to Edge Impulse, you can that shows how to connect the Nano 33 and reach a point where you are ready to upload your data.
For the Processing block we used Spectral Analysis, and for the Learning block we used Classification. Other options such as Flatten and Raw Data are also available as Processing blocks. Each Processing block has its own features and uses; if you need to dive into that, you can find .
To learn more about the individual effect these parameters have on your model, you can . This could require a bit of trial and testing to find the optimal settings.
We deployed our model to the Arduino Nano 33 BLE Sense as an Arduino library. More information on the various deployment options , but as we wanted to build some additional capabilities for alerting and notifications, we decided on using the Arduino library download option.
As mentioned, we built a feature that will send a notification to a user if a Fire event (classification) is detected. For sending a push notification to a user, we used the . Please refer to this to build your own version.
The entire assets for this project are given in this .
A smart building prototype that can adapt to current weather conditions using a combination of audio classification and sensor fusion.
Created By: Jallson Suryo
Rain and Thunder Sound - https://studio.edgeimpulse.com/public/270172/latest
Weather Conditions - https://studio.edgeimpulse.com/public/274091/latest
GitHub Repo: https://github.com/Jallson/SensorFusion_SmartBuilding
In a natural ventilation system, it is necessary to regulate the conditioning of air, humidity, temperature and light through adjusting window and louver/blind angles. An automatic system that can adapt to current conditions is the key to comfort and energy savings.
Our solution: louvers/blinds and windows that can adjust their opening angles based on rain and thunderstorm sounds, combined with environmental conditions (light intensity, humidity, and temperature), using two machine learning models on separate MCUs. One device performs sound classification, and the other performs sensor fusion of the environmental conditions.
This project takes advantage of machine learning (ML) to differentiate sounds, and also to distinguish weather conditions using a combination of temperature, humidity, and brightness. Acquiring data and building a model can be done directly in the Edge Impulse Studio. The sound data will include variations of rain and thunderstorms, as well as unknown sounds: horns, cars passing by, the sounds of people, etc. For the environmental conditions, the data will be collected directly as combinations of temperature, humidity, and light under sunny dry, sunny humid, comfortable, and overcast conditions.
The result of these two kinds of ML models will be embedded in separate MCUs (Arduino Nicla Voice & Arduino Nano 33 BLE Sense) and will be combined into a customized program to control the opening angle of the window and louver, so that energy efficiency and comfort in room conditions can be achieved optimally and automatically.
This project is a Proof of Concept (PoC) miniature model using acrylic and a styrofoam canopy, with window and louver movement controlled with angle servos. The sensors (temp, humidity, light, and sound) are already contained on the Arduino boards, so it can read the current realistic situation without adding any more specific sensors.
Arduino Nicla Voice
Arduino Nano 33 BLE Sense
3 micro servos (9g)
Charge Booster (Sparkfun) 5V
Battery 3.7V
3mm Acrylic & Styrofoam
Edge Impulse Studio
Arduino IDE
Terminal
This project uses two machine learning models, so we divide our work into two parts:
A) Rain & Thunder Sound, and B) Weather Conditions (Sensor Fusion)
Before we start, we need to install the Arduino CLI and Edge Impulse tooling on our computer. You can follow this guide to get everything installed.
Open studio.edgeimpulse.com in a browser, and sign in, or create a new account if you do not have one. Click on New project, then in Data acquisition, click on the Upload Data icon for uploading .wav files (e.g. from Pixabay, Mixkit, Zapsplat, etc.). Other methods to collect data are from devices such as a connected smartphone with a QR code link, or a connected Nicla Voice with the Edge Impulse audio firmware flashed to it. For ease of labelling when collecting and uploading data, fill in the name according to the desired label, for example rain, thunder, or z_unknown for your background noise data.
Click on a data sample that was collected, then click on the 3 dots to open the menu, and finally choose Split sample. Set the segment length to 2000 ms (2 seconds), or add segments manually, then click Split. Repeat this process until all samples are labelled in 2 second intervals. Make sure the comparison between rain, thunder and unknown data is quite balanced, and the ratio between training and test data is around 80/20.
Choose Create Impulse, set Window size to around 1 sec, then add an Audio (Syntiant) Processing block, and choose Classifier for the Learning block, then Save Impulse. In the Syntiant parameters, choose log-bin (NDP120/200) then click Save. Set the training to around 30 cycles with 0.0005 Learning rate, then click Start training. It will take a short while, but you can see the progress of the training on the right. If the results show a figure of around 90% accuracy upon completion, then we can most likely proceed to the next stage.
Now we can test the model in Live classification, or choose Model testing to test with the data that was set aside earlier (the 80/20 split), and click Classify all. If the result is greater than 80% accuracy, then we can move to the next step — Deployment.
For a Syntiant NDP device like the Nicla Voice, we can configure the posterior parameters (in this case rain and thunder). To run your Impulse locally on the Arduino Nicla Voice, you should select Syntiant NDP120 Library in the Deployment tab. The library will start building and automatically download to your computer once it is complete. Place the downloaded library in your preferred folder/repository on your computer.
To compile and flash the firmware, run:
Windows:
Mac:
Make sure you are in the correct directory and you have Arduino CLI installed on your computer when performing the commands above.
Once you’ve compiled the Arduino Firmware, do the following:
Take the .elf output generated by the Arduino CLI and change its name to "firmware.ino.elf"
Replace the firmware.ino.elf from the default audio firmware, which needs to be downloaded from here: https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-voice
Replace the ei_model.synpkg in the default firmware with the one from the Syntiant NDP120 Library you downloaded from the Deployment tab
Flash the firmware (more details below)
Start with this (in this case using a Mac, and you only need to run this script once):
If you have flashed a different firmware to the NDP120 chip previously, you should run this script:
Now run this script to flash both the MCU and NDP120:
Technically you can just flash only the NDP120 since we are going to upload a new code to the MCU via Arduino IDE anyway:
Now it is time to upload our specific program to the Arduino Nicla Voice via the Arduino IDE. You can find the .ino code here: https://github.com/Jallson/SensorFusion_SmartBuilding/blob/main/weathersoundfusion_niclavoice.ino
Once downloaded, you can upload the .ino code to your Arduino Nicla Voice using the Arduino IDE.
When a rain sound is detected, the built-in LED will blink blue and send a 1 via I2C. When thunder is detected, the LED will blink red and send a 2 via I2C.
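That hand-off can be sketched as follows; the full logic is in weathersoundfusion_niclavoice.ino, and the peripheral address 0x08 is an assumption, so use whatever address the Nano 33 BLE Sense sketch actually registers.

```cpp
#include <Wire.h>

const uint8_t PERIPHERAL_ADDR = 0x08;   // assumed I2C address of the Nano 33 BLE Sense

void setup() {
  Wire.begin();                         // the Nicla Voice acts as the I2C controller
}

// eventCode: 1 = rain detected, 2 = thunder detected
void sendWeatherEvent(uint8_t eventCode) {
  Wire.beginTransmission(PERIPHERAL_ADDR);
  Wire.write(eventCode);
  Wire.endTransmission();
}
```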
We need to prepare the Arduino Nano BLE Sense with the proper firmware to connect to the Edge Impulse Studio. Follow the instructions here to flash the firmware: https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nano-33-ble-sense. We also need to build a set-up that can replicate conditions such as sunny dry, sunny humid, comfortable, and overcast. In this case we use an air-conditioned room, hot water, light, and an iron to create the necessary environments.
Start by adding a new project in the Studio, and connect the Nano BLE Sense as described above. After the device is connected, in Data acquisition choose Environmental + Interactional in the sensor dropdown menu, set 1000 ms for the sample length, and label it appropriately. Then start sampling.
After all data are labelled and captured correctly (e.g. Overcast, Sunny Humid, Sunny Dry, Comfortable), make sure the ratio between Training and Test is ideally around 80/20.
Once you have the dataset ready, go to Create Impulse and set Window size to 1000ms and Window increase to 500ms. Add a Flatten block then choose temperature, humidity, and brightness as the input axes. Add a Classification block with Flatten feature ticked, then Save Impulse, and Save parameters. After that, set Neural Network training to 300 cycles and a 0.0005 Learning rate, then Start the training process.
Our result is above 90%, so we can check it with our test data: choose Model testing and Classify all. If that result is also good, then we can move on to the next step — Deployment.
Select Arduino library in the Deployment tab. The library will build and it will automatically download an Arduino library .zip file. Place this zipped file into your Arduino Library folder.
Now download the .ino code here: https://github.com/Jallson/SensorFusion_SmartBuilding/blob/main/weathersoundfusion_nano_ble33.ino
Once downloaded, you can upload the .ino code to your Arduino Nano 33 BLE Sense using the Arduino IDE.
In this program, when a byte variable containing 1 or 2 is received via I2C, the Arduino Nano 33 BLE Sense will adjust the window and louver accordingly. If there is no rain or thunder, the sensor fusion inference will start running and the louver and window will be controlled based on the inference result: comfortable, overcast, sunny humid, or sunny dry.
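The receiving side can be sketched in the same spirit; the actual servo pins, angles, and address live in weathersoundfusion_nano_ble33.ino, so the values here are placeholders.

```cpp
#include <Wire.h>
#include <Servo.h>

Servo windowServo;                      // placeholder: one of the three servos
volatile uint8_t lastEvent = 0;         // 0 = none, 1 = rain, 2 = thunder

void receiveEvent(int numBytes) {
  while (Wire.available()) {
    lastEvent = Wire.read();            // keep the most recent event code
  }
}

void setup() {
  Wire.begin(0x08);                     // same address the Nicla Voice writes to (assumed)
  Wire.onReceive(receiveEvent);
  windowServo.attach(9);                // assumed servo pin
}

void loop() {
  if (lastEvent == 1 || lastEvent == 2) {
    windowServo.write(0);               // e.g. close the window when rain or thunder is detected
  } else {
    // otherwise run the sensor fusion inference and set the angles from its result
  }
  delay(100);
}
```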
Create your own electronics and demo setup by following the Hardware Component diagram above. I made mine by applying servos to the windows and louvers on a miniature canopy made from acrylic and styrofoam sheets. And don't forget to prepare a lamp, iron, a glass of hot water and a fan or air-conditioning if needed.
Finally, we succeeded in implementing the idea by combining sensor fusion and sound classification ML models. The ability of the ML model to identify conditions by combining patterns in humidity, temperature, and brightness, without relying on hard-coded threshold values, is an advantage compared to purely rule-based methods. The audio classification model running on the Nicla Voice successfully identifies the sound of rain or thunder and transmits the result via I2C to the Nano BLE Sense to move the connected servos. Meanwhile, the sensor fusion model on the Nano BLE Sense can also identify other weather conditions (sunny humid, sunny dry, overcast, comfortable) and move the connected servos that control the opening of the windows and louvers.
I believe this PoC project can be implemented in a smart building system, so that comfort and energy savings can be optimized.
Using a RISC-V powered Sipeed Longan Nano to monitor air quality and alert the presence of harmful gases.
Created By: Zalmotek
Public Project Link:
https://studio.edgeimpulse.com/public/110000/latest
GitHub Repository:
Poor air quality in industrial environments can reduce productivity and raise the risk of accidents. That's why it's critical for industrial facilities to regularly evaluate air quality, guaranteeing that their staff stays healthy and productive. Typical air quality dimensions that must be monitored include CO, CO2, H2, volatile organic compounds (VOC), and volatile sulfur compounds, depending on the specific activity that is taking place within the facility.
Moreover, managers may ensure that workers stay healthy at work by establishing suitable ventilation systems that reduce outside pollution to levels that are not harmful to employees while still keeping interior settings clean. When these concentrations surpass a specific level, a traditional air quality monitoring system will sound an alarm. The downside of such a system is that it only reacts after the threshold is surpassed, warning employees that they have already been exposed to the harmful substance for a period of time.
We have developed a prototype that uses a Sipeed Longan Nano V1.1 with a RISC-V Gigadevice microprocessor and gas sensors to detect trends in the variation of air quality dimensions by creating a Machine Learning model in Edge Impulse and deploying it on the device to trigger an alarm if they are headed towards a critical level. This will allow for swift intervention to prevent the air quality from reaching hazardous levels.
Edge Impulse account
Virtual Studio Code with PlatformIO addon
Edge Impulse CLI
Udev rule ( for Linux Users)
The Sipeed Longan Nano v1.1 is an updated development board based on the Gigadevices GD32VF103CBT6 MCU chip. The board has built-in 128KB Flash and 32KB SRAM, providing ample space for students, engineers, and enthusiasts to tinker with the new generation of RISC-V processors. The board also features a micro USB port, allowing users to easily connect it to their computer for programming and debugging. In addition, the board has an on-board JTAG interface, making it easy to work with various development tools. Overall, the Sipeed Longan Nano v1.1 is a convenient and affordable option for those who want to explore the world of RISC-V processors. Besides the programming ports and I/Os, the development board includes two user-customizable buttons and a small screen, making it easy to debug and display real-time information locally.
The GD32VF103 is a 32-bit general-purpose microcontroller based on a RISC-V core that offers an excellent blend of processing power, low power consumption, and peripheral set. This device operates at 108 MHz with zero wait states for Flash accesses to achieve optimum efficiency. It has 128 KB of on-chip Flash memory and 32 KB of SRAM memory. Two APB buses link a wide range of improved I/Os and peripherals. The device has up to two 12-bit ADCs, two 12-bit DACs, four general 16-bit timers, two basic timers, as well as standard and advanced communication interfaces: up to three SPIs, two I2Cs, three USARTs, two UARTs, two I2Ss, two CANs, and a USBFS. An Enhancement Core-Local Interrupt Controller (ECLIC), SysTick timer, and additional debug features are also intimately tied with the RISC-V processor core.
The devices require a 2.6 V to 3.6 V power source and can function at temperatures ranging from –40°C to +85°C. Several power-saving modes allow for the optimization of wakeup latency and power consumption, which is an important factor when creating low-power applications.
The GD32VF103 devices are well-suited for a broad range of linked applications, particularly in industrial control, motor drives, power monitor and alarm systems, consumer and portable equipment, POS, vehicle GPS, LED display, and so on.
Features
Memory configurations are flexible, with up to 128KB on-chip Flash memory and up to 32KB SRAM memory.
A wide range of improved I/Os and peripherals are linked to two APB buses.
SPI, I2C, USART, and I2S are among the many conventional and sophisticated communication interfaces available.
Two 12-bit 1Msps ADCs with 16 channels, four general-purpose 16-bit timers, and one PWM advanced timer are included.
Three power-saving modes optimize wakeup latency and energy usage for low-power applications.
More information about this and other GD32 RISC-V Microcontrollers can be found on the official product page.
To keep everything tidy, we have designed and 3D printed a support for the development board and the sensor combo. If you have a 3D printer, you can download the files and print them without supports.
Gas sensors are electronic devices that detect and identify different types of gasses. There are a few different ways that gas sensors work but the most common type of gas sensor uses electrochemical cells. This type of sensor creates a small voltage when it comes into contact with certain gasses which is then used to identify the presence and concentration of the gas.
The MQ gas sensor series are based on the Metal Oxide Semiconductor (MOS) technology, and they function by measuring the change in electrical resistance of a metal oxide film when it is exposed to certain gasses. They have been used by makers for quite a while now, and that is advantageous because they are easy to read (most of the time just an analog pin will suffice) and the options of tracked gasses are quite diverse.
Here are the variants we found so far, so you can mix and match them for your own use case:
For our proof of concept we decided to go with a few MQ sensors, and another one from Adafruit that is actually covering a broader range of gasses with just one sensor.
The MiCS-5524 SGX Sensortech is a robust MEMS sensor for detecting indoor carbon monoxide and natural gas leaks, as well as indoor air quality monitoring, breath checker, and early fire detection. This sensor detects CO (1-1000 ppm), Ammonia (1-500 ppm), Ethanol (10-500 ppm), H2 (1-1000 ppm), and Methane/Propane/Iso-Butane (1,000++ ppm), but it cannot tell which gas it has identified. When gasses are identified, the analog voltage rises in accordance with the amount of gas detected. When turned on, the heater consumes around 25-35mA. To save energy, use the EN pin to turn it off (bring it high to 5V to switch off). Simply wait for a second after turning on the heater to ensure that it is fully heated before obtaining readings.
This sensor can detect Alcohol, Benzine, Methane (CH4), Hexane (C₆H₁₄), Liquefied Petroleum Gas (LPG), and Carbon Monoxide (CO), but it has a much higher sensitivity to alcohol than to Benzine.
This sensor can detect Hydrogen (H2), Liquefied Petroleum Gas (LPG), Methane (CH4), Carbon Monoxide (CO), and Alcohol.
This sensor can detect Carbon Monoxide (CO).
All of the sensors map the concentration of the measured gasses to an analog voltage and have to be powered from 3.3 VDC. The following table presents the wiring connections and the schematic depicts the pinout of the Sipeed Longan Nano V1.1.
| Sensor pin | Longan Nano pin |
| --- | --- |
| GND (all sensors) | GND |
| VCC (all sensors) | 3.3V |
| AO (MQ-3) | PB1 |
| AO (MQ-5) | PA7 |
| AO (MQ-8) | PB0 |
| AO (MiCS 5524) | PA6 |
To debug the Longan Nano, we must use a USB to TTL adapter. This will allow us to establish serial communication with the development board and forward the incoming messages to the Edge Impulse platform. You’ll have to wire the board to the adapter as described in the following table.
| TTL to USB converter | Longan Nano |
| --- | --- |
| GND | GND |
| TX | RX |
| RX | TX |
The Edge Impulse CLI is a suite of tools that enables you to control local devices, synchronize data for devices without an internet connection, and most importantly, collect data from a device over a serial connection and forward it to the Edge Impulse Platform.
Edge Impulse provides comprehensive official documentation regarding the installation process of the Edge Impulse CLI tools.
Let's move on to setting up our development environment.
To program the Sipeed Longan Nano development board we will employ the PlatformIO addon for VS Code, an open-source ecosystem for IoT development. It includes a cross-platform build system, a package manager, and a library manager. It is used to develop applications for various microcontrollers, including the Arduino, ESP8266, Raspberry Pi, and, relevant for our use case, Gigadevice. PlatformIO is released under the permissive Apache 2.0 license, and it is available for a variety of operating systems, including Windows, macOS, and Linux.
Install Visual Studio Code: https://code.visualstudio.com
Open VSCode, go to Extensions (on the left menu), search for PlatformIO IDE, and install the plugin. Wait for the installation to complete and restart VSCode.
Install the GD32V platform definition - click on the PlatformIO logo on the left, click on New Terminal at the bottom left, and execute the following installation command in the terminal window:
If you are a Linux user, you must also install udev rules for PlatformIO supported boards/devices. You can find a comprehensive guide on how to do that in the official PlatformIO documentation.
With PlatformIO set up, clone the following GitHub repository in your default projects folder.
Click on Files, Open folder, select LonganAnalogRead and open it.
To program the Longan Nano, we have used an Arduino Framework branched off from the official Sipeed documentation, developed and maintained by scpcom, available on GitHub.
Fundamentally, this firmware reads the gas sensors wired to analog pins PA6, PA7, PB0, and PB1 and prints the readings over serial at 115200 baud, separated by commas.
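For reference, a minimal sketch of that behavior might look like the following. This is an illustration, not the exact firmware from the repository, and it assumes the scpcom GD32V Arduino core where the STM32-style pin names PA6, PA7, PB0, and PB1 are available.

```cpp
#include <Arduino.h>

void setup() {
  Serial.begin(115200);           // same baud rate used by the serial console and data forwarder
}

void loop() {
  int mq3  = analogRead(PB1);     // MQ-3  (alcohol)
  int mq5  = analogRead(PA7);     // MQ-5  (LPG / natural gas)
  int mq8  = analogRead(PB0);     // MQ-8  (hydrogen)
  int mics = analogRead(PA6);     // MiCS-5524 (CO / VOC)

  // One comma-separated line per reading, as expected by edge-impulse-data-forwarder
  Serial.print(mq3);  Serial.print(",");
  Serial.print(mq5);  Serial.print(",");
  Serial.print(mq8);  Serial.print(",");
  Serial.println(mics);

  delay(100);                     // roughly 10 Hz sampling, matching the Impulse configuration
}
```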
To read the serial output, we have used Picocom, a terminal emulation program. To open up the serial console, run the following command in terminal:
picocom -b 115200 -r -l /dev/ttyUSB0
To exit Picocom, press CTRL+a followed by CTRL+q.
The serial port might not be the same for you but by running the following command, you can find out the correct serial port:
dmesg | grep tty
After we see a properly formatted output in the serial terminal, we must forward it to the Edge Impulse platform.
The first step towards building your TinyML Model is creating a new Edge Impulse Project. Be sure to give it a recognizable name, select Developer as your project type, and click on Create new project.
To assign the device to the newly created Edge Impulse project, run the following command:
edge-impulse-data-forwarder -clean
You will be prompted for the email address and password used to access your Edge Impulse account. The CLI will auto-detect the data frequency, prompt you to name the sensor axes corresponding to each measurement, and finally ask you to give the device a fitting name.
If you navigate to the Devices tab, you will see your newly defined device with a green marker next to it, indicating that it is online and ready for data acquisition.
For this particular use case, we will be training a model to detect 2 dangerous situations that may occur in an automobile painting facility: an alcohol leakage and a methane gas leakage. Both of those can be dangerous and hazardous to employees' health.
Navigate to the Data Acquisition screen. Notice that on the right side of the screen the device is present, with the 4 axes we have previously defined in the terminal and the auto-detected data acquisition frequency. Select a sample length of 10 seconds, give the label a name, and Start sampling.
When building the dataset, keep in mind that machine learning leverages data, so when creating a new class (defined by a label), try to record at least 2-3 minutes of data.
After a sample is collected successfully, it will be displayed in the raw data tab.
Also, remember to collect some samples for the Testing dataset, in order to ensure roughly an 85%/15% split between the Training and Testing set sizes.
After the data collection phase is over, the next step is to create an Impulse. An Impulse takes raw data from your dataset, divides it into digestible chunks called "windows," extracts features using signal processing blocks, and then uses the learning block to classify new data.
For this application, we are going to use a 1 second window at a data acquisition frequency of 10 Hz, with the Zero-pad data option checked. We will use a Flatten processing block, which is well suited to slow-moving averages, and a Classification (Keras) learning block.
Configuring the Flatten block is a straightforward procedure. Leave all the methods checked and the scale axes to default 1 and click on Save Parameters.
Fundamentally, what the Flatten block does is this: if the value of Scale axes is less than 1, it first rescales the signal's axes. Then, depending on the number of methods chosen, statistical analysis is performed on each window, computing between 1 and 7 characteristics for each axis.
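To make that concrete, here is a small stand-alone C++ program (not Edge Impulse's actual implementation) that computes the kind of per-axis statistics the Flatten block produces for one window of readings:

```cpp
#include <cmath>
#include <cstdio>

// Compute a few of the Flatten block's methods (average, min, max, RMS, std. dev.)
// for a single axis of one window.
void flatten_axis(const float *window, int n) {
    float sum = 0.0f, sq_sum = 0.0f, mn = window[0], mx = window[0];
    for (int i = 0; i < n; i++) {
        sum += window[i];
        sq_sum += window[i] * window[i];
        if (window[i] < mn) mn = window[i];
        if (window[i] > mx) mx = window[i];
    }
    float avg = sum / n;
    float rms = std::sqrt(sq_sum / n);
    float var = 0.0f;
    for (int i = 0; i < n; i++) var += (window[i] - avg) * (window[i] - avg);
    float stdev = std::sqrt(var / n);
    std::printf("avg=%.1f min=%.1f max=%.1f rms=%.1f stdev=%.1f\n", avg, mn, mx, rms, stdev);
}

int main() {
    // One axis (e.g. the MQ-3 reading) of a 1 second window sampled at 10 Hz.
    float mq3_window[10] = {512, 515, 511, 530, 528, 527, 540, 543, 541, 539};
    flatten_axis(mq3_window, 10);
    return 0;
}
```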
Under the Impulse Design menu, the NN Classifier tab allows us to define several parameters that influence the neural network's training process. For the time being, the Training setting can be left at its default value. Click on the Start Training button and notice how the training process is assigned to a processing cluster.
The training output will be displayed once the job completes. Our goal is to achieve an accuracy of over 95%. The Confusion matrix directly beneath it shows, in tabulated form, the correct and incorrect responses provided by our model after it was fed the previously acquired dataset. In our example, if a methane leak happens, there is a 28.6 percent probability that it will be mistaken for an alcohol leak. Because such phenomena are hard to simulate in an electronics lab, our accuracy is under 90%, but it is good enough to illustrate this proof of concept.
The Data Explorer provides a visual representation of the dataset and it helps in visualizing the misclassified Methane leakage points that are being placed in close proximity to the Alcohol Leakage points.
A great way of going about testing our model is to navigate to the Model Testing tab. You will be presented with the samples stored in the Testing data pool. Click on Classify all to run all this data through your Impulse.
The Model testing tab provides the user the ability to test out and optimize the model before going through the effort of deploying it back on the edge. The possibility of going back and adding Training data, tweaking the DSP and Learning block, and fine-tuning the model shaves off an enormous amount of development time when creating an edge computing application.
Once you are happy with the performance of the TinyML model, it’s time to deploy it back on the edge. Navigate to the Deployment tab, select Arduino library, and click Build.
This will create an Arduino library that encapsulates all the DSP blocks, their configuration, and the learning blocks. Download and extract the library into the libs folder of your PlatformIO project.
Next up, let’s build an application that lights up the on-board LED if the system detects with a certainty of over 90% that an Alcohol Leakage has occurred. In a real world situation, instead of lighting up the LED, the system can switch a relay to start an exhaust system or sound an alarm.
By selecting the proper sensors for your use case and training the model accordingly, you may develop an accurate bespoke gas tracker using the methods mentioned above. The Gigadevice processor is a powerhouse, and we believe it is underutilized in this application. However, given the price and capabilities of the development board, it is a good buy, with room to grow for other applications as RISC-V processors gain popularity in industry, academia, and among hobbyists.
While gas sensors are important for ensuring safety in confined spaces and for reducing environmental pollution, they have many uses beyond industry. In the home, gas sensors can detect leaks and improve energy efficiency. In transportation, they can monitor engine performance and reduce emissions. In the wild, they can help prevent wildfires as part of an early detection system: sensors placed in an area continuously monitor the air for combustible gasses.
Compared to simple "if"-based conditions that only trigger once a gas concentration passes an arbitrarily defined threshold, an Edge Impulse model that detects trends may prove beneficial by reducing the reaction time and, implicitly, the exposure time of employees in these situations.
If you need assistance in deploying your own solutions or more information about the tutorial above please reach out to us!
Use the SensiEDGE CommonSense board to capture multiple sensor values and perform sensor fusion to identify locations.
Created By: Marcelo Rovai
Public Project Link: https://studio.edgeimpulse.com/public/281425/latest
GitHub Repo: https://github.com/Mjrovai/Sony-Spresense
This tutorial will develop a model based on the data captured with the Sony Spresense sensor extension board, SensiEDGE's CommonSense.
The general idea is to explore sensor fusion techniques, capturing environmental data such as temperature, humidity, and pressure, adding light and VOC (Volatile Organic Compounds) data to estimate what room the device is located within.
We will develop a project where our "smart device" will indicate where it is located among four different locations of a house:
Kitchen,
Laboratory (Office),
Bathroom, or
Service Area
The project will be divided into the following steps:
Sony's Spresense main board installation and test (Arduino IDE 2.x)
Spresense extension board installation and test (Arduino IDE 2.x)
Connecting the CommonSense board to the Spresense
Connecting the CommonSense board to the Edge Impulse Studio
Creating a Sensor Log for Dataset capture
Dataset collection
Dataset Pre-Processing (Data Curation)
Uploading the Curated data to Edge Impulse Studio
Training and testing the model
Deploying the trained model on the Spresense-CommonSense board
Doing Real Inference
Conclusion
You can follow this link for a more detailed explanation.
Installing USB-to-serial drivers (CP210x)
Download and install the USB-to-serial drivers that correspond to your operating system from the following links:
CP210x USB to serial driver (v11.1.0) for Windows 10/11
CP210x USB to serial driver for Mac OS X
If you use the latest Silicon Labs driver (v11.2.0) in a Windows 10/11 environment, USB communication may cause an error and fail to flash the program. Please download v11.1.0 from the above URL and install it.
Install Spresense Arduino Library
Copy and paste the following URL into the field called Additional Boards Managers URLs:
https://github.com/sonydevworld/spresense-arduino-compatible/releases/download/generic/package_spresense_index.json
Install Reference Board:
Select Board and Port
The Board and port selection can also be done by selecting them on the Top Menu:
Install BootLoader
5.1 Select Programmer → Spresense Firmware Updater
5.2 Select Burn Bootloader
During the process, it will be necessary to accept the License agreement.
Run the BLINK sketch on Examples → Basics → Blink.ino
Testing with all the 4 LEDs:
The Spresense main board has 4 LEDs. The built-in LED is LED0 (the far-right one), but each of them can be accessed individually. Run the code below:
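The original listing is not reproduced here, but a minimal equivalent using the LED0 to LED3 constants defined by the Spresense Arduino core could look like this:

```cpp
// Chase pattern across the four LEDs on the Spresense main board.
void setup() {
  pinMode(LED0, OUTPUT);
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
  pinMode(LED3, OUTPUT);
}

void loop() {
  int leds[] = {LED0, LED1, LED2, LED3};
  for (int i = 0; i < 4; i++) {
    digitalWrite(leds[i], HIGH);   // turn one LED on
    delay(100);
    digitalWrite(leds[i], LOW);    // then off, before moving to the next one
  }
}
```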
Main Features:
Audio input/output - 4ch analog microphone input or 8ch digital microphone input, headphone output
Digital input/output - 3.3V or 5V digital I/O
Analog input - 6ch (5.0V range)
External memory interface - microSD card slot
It is important to note that the Spresense main board is a low-power device running on 1.8V (including I/Os). So, installing the main board on the extension board, which has an Arduino UNO form factor and accepts up to 5V on GPIOs, is advised. Besides, the microSD card slot will be used for our Datalog.
The package of the Spresense board has 4 spacers to attach the Spresense main board.
Insert them on the Extension board and connect the main board as below:
Once the Main Board is attached to the Extension Board, insert an SD card (Formatted as FAT32).
Run Examples → File → read_write.ino from the Spresense examples.
You should see the messages on the Serial Monitor showing that "testing…" was written on the SD card. Remove the SD card and check it on your computer. Note that I gave my card the name DATASET. Usually, for new cards, you will see, for example, NO NAME.
The CommonSense expansion board, produced by SensiEDGE, provides an array of new sensor capabilities to the Spresense, including an accelerometer, gyroscope, magnetometer, temperature, humidity, pressure, proximity, ambient light, IR, microphone, and air quality (VOC). As a user interface, the board contains a buzzer, a button, an SD card reader, and an RGB LED.
The CommonSense board also features an integrated rechargeable battery connection, eliminating the necessity for a continuous power supply and allowing finished products to be viable for remote installations where a constant power source might be challenging to secure.
Below is a block diagram showing the board's main components:
Note that the sensors are connected via the I2C bus, except for the digital microphone.
So, before installing the board, let's map the main board's I2C bus. Run the sketch Examples → Wire → I2CScanner.ino from the Spresense examples in the Arduino IDE. On the Serial Monitor, we confirm that no I2C devices are detected:
Now, connect the CommonSense board on top of the Spresense main board as shown below:
Reconnect the Mainboard to your Computer (use the Spresense Main Board USB connector), and run the I2C mapping sketch once again. As a result, now, 12 I2C devices are found:
For example, the SGP40 (VOC sensor) is at address 0x59, the APDS-9250 (light sensor) at 0x52, the HTS221 (temperature & humidity sensor) at 0x5F, the LPS22HH (pressure sensor) at 0x5D, the VL53L1X (distance sensor) at 0x29, the LSM6DSOX (accelerometer & gyroscope) at 0x6A, the LIS2MDL (magnetometer) at 0x1E, and so on.
We have confirmed that the main MCU recognizes the sensors on the CommonSense board. Now, it is time to access and test them. For that, we will connect the board to the Edge Impulse Studio.
Go to EdgeImpulse.com, create a Project, and connect the device:
Search for supported devices and click on Sony's Spresense:
On the page that opens, go to the final portion of the document: Sensor Fusion with Sony Spresense and SensiEDGE CommonSense, and download the latest Edge Impulse Firmware for the CommonSense board: https://cdn.edgeimpulse.com/firmware/sony-spresense-commonsense.zip.
Unzip the file and run the script related to your Operating System:
And flash your board:
Run the Edge Impulse CLI and access your project:
Returning to your project, on the Devices Tab, you should confirm that your device is connected:
You can select all sensors individually or combined on the Data Acquisition Tab.
For example:
It is possible to use the Studio to collect data online, but we will use the Arduino IDE to create a Datalogger that can be used offline and not connected to our computer. The dataset can be uploaded later as a .CSV file.
For our project, we will need to install the libraries for the following sensors:
VOC - SGP40
Temperature & Humidity - HTS221TR
Pressure - LPS22HH
Light - APDS9250
Below are the required libraries:
APDS-9250: Digital RGB, IR, and Ambient Light Sensor Download the Arduino Library and install it (as .zip): https://www.artekit.eu/resources/ak-apds-9250/doc/Artekit_APDS9250.zip
HTS221 Temperature & Humidity Sensor Install the STM32duino HTS221 directly on the IDE Library Manager
SGP40 Gas Sensor Install the Sensirion I2C SGP40
LPS22HH Pressure Sensor Install the STM32duino LPS22HH
VL53L1X Time-of-Flight (Distance) Sensor (optional*) Install the VLS53L1X by Pololu
LSM6DSOX 3D accelerometer and 3D gyroscope Sensor (optional*) Install the Arduino_LSM6DSOX by Arduino
LIS2MDL - 3-Axis Magnetometer Sensor (optional*) Install the STM32duino LIS2MDL by SRA
*We will not use those sensors here, but I listed them in case they are needed for another project.
The code is simple. On a specified interval, the data will be stored on the SD card with a sample frequency specified on the line:
For example, I will have a new log each 10s in my data collection.
Also, the built-in LED will blink for each correct datalog, helping to verify if the device is working correctly during offline operation.
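To show the overall structure, here is a heavily simplified sketch of such a logging loop, assuming the Spresense SDHCI library. The read_*() helpers are stubs standing in for the sensor libraries listed above, and only four of the eight logged values are shown.

```cpp
#include <SDHCI.h>

SDClass SD;
const unsigned long LOG_INTERVAL_MS = 10000;   // one log entry every 10 s, as in the text

// Stubs standing in for the sensor libraries listed above (LPS22HH, HTS221, SGP40, APDS-9250).
float read_pressure()    { return 1013.0; }
float read_temperature() { return 25.0; }
float read_humidity()    { return 50.0; }
float read_voc()         { return 100.0; }

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
  while (!SD.begin()) { ; }                    // wait for the SD card to be ready
}

void loop() {
  File logFile = SD.open("datalog.csv", FILE_WRITE);   // FILE_WRITE appends to the file
  if (logFile) {
    logFile.print(read_pressure());    logFile.print(",");
    logFile.print(read_temperature()); logFile.print(",");
    logFile.print(read_humidity());    logFile.print(",");
    logFile.println(read_voc());       // the real logger also writes the four light-sensor channels
    logFile.close();

    digitalWrite(LED_BUILTIN, HIGH);   // blink to confirm a successful log entry
    delay(100);
    digitalWrite(LED_BUILTIN, LOW);
  }
  delay(LOG_INTERVAL_MS);
}
```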
Here is how the data will be shown on the Serial Monitor (for testing only).
The data logger will capture data from the eight sensors (pressure, temperature, humidity, VOC, light-red, light-green, light-blue, and IR). I have captured around two hours of data (one sample every 10 seconds) in each housing area (Laboratory, Bathroom, Kitchen, and Service Area).
The CommonSense device worked offline and was powered by a 5V Powerbank as shown below:
Here is the raw dataset as stored on the SD card:
As a first test, I uploaded the data to the Studio using the "CSV Wizard" tool. I also let the Studio split the data into Train and Test sets. Because the TimeStamp column of my raw data was just a sequential count, the Studio assumed a sample frequency of 1 Hz, which is fine.
For the Impulse, I considered a window of 3 samples (here 3,000 ms) with a slice of 1 sample (1,000 ms). As a Processing Block, "Flatten" was chosen; as this block changes an axis into a single value, it is helpful for slow-moving averages like the data we are capturing. For Learning, we will use "Classification" and Anomaly Detection (this one only for testing).
For Pre-Processing, we will choose Average, Minimum, Maximum, RMS, and Standard Deviation as parameters, applied to each of the axes. So, the original 24 raw features (3 samples × 8 sensors) become 40 features (5 parameters for each of the original eight sensors).
The final generated features seem promising, with a good visual separation from the data points:
Now it is time to define our Classification model and train it. A simple DNN model with 2 hidden layers was chosen, with 30 epochs and a Learning Rate (LR) of 0.0005 as the main hyperparameters.
The result: a complete disaster!
Let's examine why:
First, all the steps defined and performed in the Studio are correct. The problem is with the raw data that was uploaded. In tasks like sensor fusion, where data from multiple sensors, each with its measurement units and scales, are combined to create a more comprehensive view of a system, normalization and standardization are crucial preprocessing steps in a machine learning project.
So, before uploading the data to the Studio, we should "curate" it; better said, we should normalize or standardize our sensor data to ensure faster model convergence, better performance, and more reliable sensor fusion outcomes.
In the tutorial "Using Sensor Fusion and Machine Learning to Create an AI Nose", Shawn Hymel explains how to have a sound Sensor Fusion project. In this project, we will follow his advice.
Use the notebook [data_preparation.ipynb](https://github.com/Mjrovai/Sony-Spresense/blob/main/notebooks/Spresence-CommonSense/data_preparation.ipynb) for data curation, following these steps:
Open the Notebook on Google Colab
Open the File Manager on the left panel, go to the "three dots" menu, and create a new folder named "data"
On the data folder, go to the three dots menu and choose "upload"
Select the raw data .csv files on your computer. They should appear in the Files directory on the left panel
Create four data frames, one for each file:
bath → bathroom - Shape: (728, 9)
kit → kitchen - Shape: (770, 9)
lab → lab - Shape: (719, 9)
serv → service - Shape: (765, 9)
Here is what one of them looks like:
Plotting the data, we can see that the initial data (around ten samples) present some instability.
So, we should delete them. Here is what the final data looks like:
We should proceed with the same cleaning for all 4 data frames.
We should split the data into Train and Test sets at this early stage, because the standardization or normalization applied to the Test data must later use parameters computed from the Training data only.
To start, let's create a new column with the corresponding label.
We will put apart 100 data points from each dataset for testing later.
And concatenating each data frame in two single datasets for Train and Test:
We should plot pairwise relationships between variables within a dataset using the function "plot_pairplot()".
Looking at the sensor measurements on the left, we can see that each sensor's data spans a very different range of values, so we need to standardize or normalize each of the numerical columns. But which technique should we use? Looking at the plot's diagonal, it is possible to see that the data distribution for each sensor does not follow a normal distribution, so normalization should be the better option in this case.
Also, the data related to the light sensors (red, green, blue, and IR) correlate significantly (the plot appears as a diagonal line). This means that only one of those features should be used (or a combination of them). Leaving them separated will not damage the model; it will only make it a little bigger. But as the model is small, we will leave those features.
We should apply the normalization to the numerical features of the training data, saving as a list the mins and ranges found for each column. Here is the function used to Normalize the train data:
Those same values (train mins and ranges) should be applied to the Test dataset. Remember that the Test dataset should be new data for the model, simulating "real data", meaning we do not see this data during Training. Here is the function that can be used:
Both files will have this format:
The last step in the preparation should be saving both datasets (Train and Test) and also the train mins and ranges parameters to be used during inference.
Save the files to your computer using the option "Download" on the three dots menu in front of the four files on the left panel.
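As a side note on how those parameters are used later: on the device, each raw reading has to be mapped into the same 0 to 1 range the model saw during training. A minimal C++ sketch of that step is shown below; the numeric values are placeholders only, not the real parameters exported by the notebook.

```cpp
const int NUM_FEATURES = 8;   // pressure, temperature, humidity, VOC, red, green, blue, IR

// Placeholder values only -- replace them with the mins and ranges saved by the notebook.
float train_min[NUM_FEATURES]   = {980.0, 15.0, 20.0,  50.0,    0.0,    0.0,    0.0,    0.0};
float train_range[NUM_FEATURES] = { 60.0, 20.0, 60.0, 400.0, 1000.0, 1000.0, 1000.0, 1000.0};

// Min-max normalization with the training parameters, applied to one raw reading.
float normalize_feature(float raw, int i) {
  return (train_range[i] != 0.0f) ? (raw - train_min[i]) / train_range[i] : 0.0f;
}
```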
As we did before, we should upload the curated data to the Studio using the "CSV Wizard" tool. This time, we will upload 2 separate files, Train and Test. Since we did not include a timestamp (or count) column when saving the .csv files, in the CSV Wizard we should specify the sample frequency (in our case 0.1 Hz, one sample every 10 seconds). I also let the Studio define the labels, pointing it to the column where they are located ("class").
For the Impulse, I again considered a window of 3 samples with a slice of 1 sample. As the Processing Block, "Flatten" was chosen; since this block reduces each axis of a window to a set of single values, it is helpful for slow-moving averages like the data we are capturing. For Learning, we will use "Classification" and Anomaly Detection (the latter only for testing).
The main difference now, after uploading the files, is that the total data collection time shown is more than 8 hours, which is correct, since I captured around 2 hours in each of the four rooms of my home.
The window size for the Impulse will now be 30,000 ms, equivalent to 3 samples, with a window increase of 1 ms. For Pre-Processing, we will again choose Average, Minimum, Maximum, RMS, and Standard Deviation as parameters, applied to each of the axes. So, the original 24 raw features (3 samples × 8 sensors) result in 40 features (5 parameters for each of the original eight sensors).
The final generated features are very similar to what we got with the first version (raw data).
For the Classification model definition and training, we will keep the same hyperparameters as before: a simple DNN model with 2 hidden layers, 30 epochs, and a Learning Rate (LR) of 0.0005.
And the result now was great!
For the Anomaly Detection training, we used all the RMS values. Testing the model with the Test data confirmed this: the result was again very good, so there seems to be no issue with Anomaly Detection.
For Deployment, we will select an Arduino Library and a non-optimized (Floating Point) model. Again, the cost of memory and latency is very small, and we can afford it on this device.
To start, let's run the Static Buffer example. For that, we should select one raw sample as our model input tensor (in our case, a data point from the Service Area, class: serv). This value should be pasted on the line:
Connect the Spresense board to your computer, select the appropriate port, and upload the sketch. On the Serial Monitor, you should see the classification result showing serv with the correct score.
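Schematically, the static buffer flow looks like the snippet below. This is not a verbatim copy of the generated example: the header name depends on your project, and the feature values are placeholders for the raw sample you copy from the Studio.

```cpp
#include <Your_Project_inferencing.h>   // placeholder -- use the header from your exported Arduino library

static const float features[] = {
    // paste here the raw features copied from the Studio (one "serv" sample)
    0.1f, 0.2f, 0.3f /* , ... */
};

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Guard against a feature array that does not match the model's expected input size.
  if (sizeof(features) / sizeof(float) != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
    Serial.println("Feature array size does not match the model input size");
    delay(2000);
    return;
  }

  signal_t signal;
  numpy::signal_from_buffer((float *)features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
      Serial.print(result.classification[ix].label);
      Serial.print(": ");
      Serial.println(result.classification[ix].value);
    }
  }
  delay(2000);
}
```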
Based on the work done by Shawn Hymel, I adapted his code for using our Spresense-CommonSense board. The complete code can be found here: Spresense-Commonsense-inference.ino
Here is the code to be used:
Upload the code to the device and proceed with the inference in the four locations (note: wait around 2 minutes for sensor stabilization):
SensiEDGE's CommonSense board is a good choice for developing machine learning projects that involve multiple sensors. It provides accurate sensor data and can be used for sensor fusion techniques. This tutorial went step by step on a successfully developed model to estimate the location of a device in different rooms of a house using the CommonSense board, Arduino IDE, and Edge Impulse Studio.
All the code and notebook used in this project can be found in the Project Repo: Sony-Spresense.
And the Edge Impulse Studio project is located here: CommonSense-Sensor-Fusion-Preprocessed-data-v2
Build more accurate fire detection machine learning models by combining image and temperature sensor data for greater understanding of an environment.
Created By: Solomon Githu
Public Project Link: https://studio.edgeimpulse.com/public/487566/latest
GitHub Repo: https://github.com/SolomonGithu/Arduino_Nano_33_BLE_Sense_fire_detection_using_sensor_fusion
Sensors are utilized in everything from personal computers, smartphones, cars, airplanes, and industrial equipment; even modern fans and refrigerators contain sensors! For some use cases, simple devices are built with a single sensor. For example, a refrigerator will have a temperature sensor, automatic lights will use a motion sensor, a television will use an infrared sensor to receive commands from a remote, and so on. However, for advanced use cases there is a need to gather data from multiple sensors so that the system can get a better understanding of the situation. This helps to reduce some of the uncertainty that comes with using an individual sensor.
To understand sensor fusion, let us consider the image below. In the image, we can see two different scenarios. In the first, a lady is walking on the road and there is a billboard which shows a picture of candles. In the next scenario, we see a gentleman walking along a road that has fire. In these two situations, it is clear that the gentleman is walking on a dangerous path since there are "real" flames. The lady will simply look at the stunning candles, and continue walking.
However, there are AI-driven CCTV cameras in both scenarios, and both cameras have used computer vision to detect the flames. Although flames are seen in both circumstances, the candle flames are a picture and not an actual fire. Our brains naturally use multiple senses to better understand our surroundings: we can see a fire and determine whether it is a real fire or an image, as in this scenario. But how can computers have such comprehensive understanding? This is where sensor fusion comes in. In this scenario, optical vision alone cannot determine whether or not there is a fire. The computer can use another input, such as temperature data, since a real fire will also produce high temperatures, unlike the billboard representation of one.
The idea is that each input (camera image and temperature value) has its own strengths and weaknesses. The camera gives a visual understanding of the environment, while the temperature sensor gives environmental data. This is a complementary sensor fusion technique: the two independent sensors have their outputs combined to give a complete assessment of the situation. If a fire is detected by the camera and there are high temperatures, then the system can conclude that there is a "real" fire. At the same time, if the system records temperatures of around 500 degrees Celsius or more, it can suggest that there may be a fire nearby, as the flames may not be visible but their temperature effect is felt.
One of our greatest fears is the danger of a fire in our homes, workplaces, on our properties, or even while we are inside vehicles. Although devices and algorithms have been created to detect fires, they have their limitations. In some cases the limitations come from relying on vision sensors alone, while in others a smoke detector can fail to detect a fire if the smoke never reaches the sensor.
To demonstrate sensor fusion, I trained a Machine Learning model to detect if an indoor fire is present using both image and environmental (temperature) data. I used a custom multi-input Convolutional Neural Network (CNN) model to classify if there is a fire or not. To achieve this, I utilized a tensor slicing technique to work with sub-sections of tensors. Each input to the model consists of both an image and a corresponding scalar value (temperature). The model's input tensor is sliced into an image and temperature tensor. The outputs from these two tensors are then combined and processed further to produce an output (classification of the various classes). The tensor slicing technique is advantageous in this case as well because Edge Impulse does not support multi-input models at the time of this writing. Finally, after training and testing the model, I used the Edge Impulse platform to deploy the sensor fusion model to an Arduino Tiny Machine Learning Kit.
The Arduino Nano 33 BLE Sense board turns the onboard RGB LED green while in a safe environment, and the RGB LED turns red when a fire is detected. The Arduino TinyML Kit has been used to obtain the training data for this project and also to run inference. This TinyML Kit includes an Arduino Nano 33 BLE Sense, which features the powerful nRF52840 processor from Nordic Semiconductor, a 32-bit ARM Cortex-M4 CPU running at 64 MHz. The Arduino Nano 33 BLE Sense also has onboard sensors for movement, acceleration, rotation, barometric pressure, temperature, humidity, sound, gesture, proximity, color, and light intensity. In addition, the kit includes an OV7675 camera module, which makes it easy to develop and deploy image processing applications! For this use case, the TinyML Kit is a good choice since it has a camera, a temperature sensor, and the ability to run optimized Machine Learning models on the Arduino board.
While this project could have been implemented using a more powerful hardware such as a GPU, AI accelerator, or a CPU, there have been huge advancements in hardware and software ecosystems enabling Machine Learning to be brought to the low-power resource-constrained devices like microcontrollers. Some problems don't need high performance computers to be solved. A small, low-cost and low-power device like an Arduino board can also get the job done!
Sensor fusion on the Arduino Nano 33 BLE Sense is already supported by Edge Impulse. However, in Data Acquisition there is no support for the Arduino Nano 33 BLE Sense to acquire data from both a camera and environmental sensors. The reason is that sensor fusion is complicated and there are many possible combinations of sensors; merging camera and environmental data is a custom sensor fusion scenario. However, we will still use powerful tools from Edge Impulse to estimate the model's performance on various hardware, optimize the model, and package it as an Arduino library that we can include in our sketches.
The image below shows the project flow from data acquisition, model training to deployment. In this project, I will also demonstrate how to use multi-input models with the Edge Impulse platform. The project's source files and dataset can be found in this GitHub repository.
Software components:
Arduino IDE
Python
Google Colab
Edge Impulse Studio account
Hardware components:
Arduino TinyML Kit
A personal computer
In this project, the Machine Learning model will use an image and a temperature value as its inputs. The goal is to train a model to effectively classify whether an environment has a fire or not. I created two classes for the different environments: fire and safe_environment. On paper, that sounded like an easy task.
First, I created an Arduino sketch for the Arduino Nano 33 BLE Sense. The Arduino code records the room temperature using the onboard HTS221 temperature sensor and prints it via UART. Afterwards, the code captures an image using the OV7675 camera module. While working with the OV767X library, I realized that it takes a very long time to capture an image, so I modified the nano_33ble_sense_camera.ino camera example from Edge Impulse's Arduino library deployment to capture the image instead. Edge Impulse's Arduino camera code for the OV7675 has a custom driver that makes it much faster to get image data from the camera. After an image has been captured, it is encoded to base64. For this, I utilized Edge Impulse's open-source Arduino Nano 33 BLE Sense firmware, using parts of its take_snapshot function to take a snapshot, encode it as base64, and print it to UART. With this code, the Arduino Nano 33 BLE Sense constantly samples a snapshot and a temperature value, which are then printed via UART (Serial). Note that it is not a good idea to send Strings via Serial due to memory issues, but in this case I worked with Strings. The image width and height can be controlled with the variables WIDTH and HEIGHT respectively; the default image size is 240x240 pixels. Note that increasing the image dimensions will increase the time the Arduino board takes to capture an image, and also the time the Python script needs to decode the base64 data and save it as a .JPG image.
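To give a feel for the sketch's structure, here is a stripped-down skeleton. The temperature part uses the standard Arduino_HTS221 library; the camera capture and base64 printing are only indicated as a placeholder, since they rely on the Edge Impulse OV7675 driver code described above.

```cpp
#include <Arduino_HTS221.h>

void setup() {
  Serial.begin(115200);
  HTS.begin();                      // onboard HTS221 temperature/humidity sensor
  // camera initialization (Edge Impulse OV7675 driver) goes here
}

void loop() {
  float temperature = HTS.readTemperature();
  Serial.print("Temperature: ");
  Serial.println(temperature);

  // capture_and_print_base64_snapshot();   // placeholder for the adapted take_snapshot logic

  delay(2000);                      // sampling interval (assumption)
}
```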
After the Arduino sketch, I created a Python script that reads the serial messages from the Arduino board, processes them, decodes the base64 image data, and saves it as a .JPG image, while the temperature value is saved in a .CSV file. To collect data for each class (environment) separately, the script saves the images and the .CSV file in a folder named after the class. We can control the number of samples to be saved from the serial data using the variable number_of_samples_to_collect.

To use the Arduino code and Python scripts to create a dataset, we first upload the Arduino code to an Arduino Nano 33 BLE Sense. Once the code is uploaded, we identify the COM port of the board and update the SERIAL_PORT variable accordingly in the Python script. Install the Python libraries on your computer using pip install -r requirements.txt, and finally run the Python script with the command python read_and_save_serial_data.py. The Python script will automatically process the serial data, save photos as .JPG, and store temperature values in a .CSV file. The images are numbered sequentially, and their file names are also put in the .CSV file, in the same row as the temperature recorded at the moment the photo was taken.
Since fire is dangerous and difficult to control, I used an oven and a candle to collect data. The oven generates temperatures higher than the ambient room temperature, which can be detected by the temperature sensor. The candle gives a flame that can be optically detected by the camera; therefore, the two sensors complement each other. I secured the Arduino TinyML Kit on a tripod stand and faced it toward an oven. For the safe environment (safe_environment class), I had the oven switched off and the candle was not lit. In total, I collected 60 images and 60 temperature values ranging between 23 and 27 degrees Celsius. The images below show how the Arduino board was placed next to the oven, an image that was captured, and the .CSV file with the temperature values and the class label.
After collecting the data for the safe environment condition, I turned on the oven and lit the candle. Safety precautions were also practiced, and the gas inlet valve to the cooker was turned off! I then ran the Python script again as the Arduino board continued to sample the environment data. In the Python script, I set the dataset_class variable to "fire", which makes the script save the images and .CSV file to a folder named fire. Since the HTS221 is only guaranteed to operate over a temperature range of -40 to +120 degrees Celsius, I did not put the Arduino board inside the oven, to prevent overheating and damaging the board. In this case, the board recorded temperatures of 60 to 70 degrees Celsius while it was next to the oven, placed on the oven door.
Below is one of the images and the temperature values captured by the Arduino board when next to a hot oven and flaming candle.
In this data acquisition, I wanted the model to understand the parameters that differentiate a safe environment from an environment that has a fire. In my first attempt, I had a dataset with fire temperatures ranging between 65 and 70 degrees Celsius, just as the Arduino board had recorded them, and all the images for the fire class had a flame in them. As much as this is a good representation of the situation, it has some limitations. After training the model, I saw that it was too biased toward the fire class; any image that was not similar to the ones used for training was classified as fire, even if the temperature value was as low as 20 degrees Celsius. I therefore decided to update the dataset for the fire class: I replaced half of the images and temperature values with ones obtained in the safe environment class. With this change, the model was able to better understand the relationship between the two inputs. A fire can be seen while the recorded temperature is as low as 20 degrees; in that case, the temperature sensor may simply not be within range of the fire. At the same time, in an environment with fire, no flames may be visible while temperatures reach 70 degrees; the flame may not be detected by the camera, but the high temperature can be felt.
Once the Arduino and Python script have been used to gather the data, I developed this notebook on Google Colab to load the dataset and train the Machine Learning model. For this notebook, we only need to set the dataset source and put an API key for an Edge Impulse project. Once these two are set, we can run the entire notebook and the model will be trained, tested and profiled with the Edge Impulse Python SDK. Once the profiling is completed, we can then open the Edge Impulse project and deploy the model.
First, let us go through the notebook. The first operation is to clone the GitHub repository, which also contains the dataset for this demonstration project. Instead of this dataset, you can uncomment the first cell of the notebook (and comment out the git clone command) to load a custom dataset folder from your Google Drive.
Next, the notebook installs the required dependencies, loads the .CSV files from the fire and safe_environment folders into a pandas data frame, defines the image parameters, and loads the images.
After experimenting with various image configurations, I settled on a grayscale image of 32 by 32 pixels. The reason is that increasing the image dimensions affects the performance of a model in several ways. A larger image increases the number of parameters (features) of the model, which requires more flash memory, and it also increases RAM usage; on the Arduino Nano 33 BLE Sense we only have 1MB of flash memory and 256KB of RAM. The computation time also increases with the size of the data being processed. In my experiments, I found that with a grayscale image of 48x48 pixels, the model had a flash usage of 1.4MB, RAM usage of 333.5KB, and a processing time of 18981ms (about 19 seconds). With a grayscale image of 40x40 pixels, I could not deploy the model to the Arduino Nano 33 BLE Sense because the final inference sketch overflowed the available flash memory by 132344 bytes. Therefore, I settled on using a grayscale image of 32 by 32 pixels.
For the model architecture, I developed a multi-input neural network to process the image data and an additional temperature (scalar) input. The model starts with a single input layer that combines the image data and the temperature value into one tensor. The input layer is then sliced into two parts: an image tensor that processes the image data, and a temperature tensor that processes the temperature value. The image branch consists of a series of convolutional layers and pooling layers that are applied to the image data, followed by flattening the result into a 1D tensor. A dense layer processes the temperature input in the temperature branch. The outputs from the image processing branch and the temperature processing branch are then concatenated. The concatenated tensor passes through a dense layer with dropout, and finally, an output layer with a softmax activation function for classification. The model is defined with the combined input layer and the final output, and then compiled with the Adam optimizer, categorical cross-entropy loss, and accuracy metric.
The technique used in this architecture is known as tensor slicing: we slice the combined input tensor into its parts and then put the results back together in the right order. The technique is also advantageous here because it enables us to integrate the multi-input model with the Edge Impulse platform.
To get an API key for an Edge Impulse project, we can create a new project in the Edge Impulse Studio and then copy the generated API key. Afterwards, we need to paste the API key into the ei.API_KEY variable in the notebook.
Having gained an understanding of the notebook's structure, we can run the entire notebook on Google Colab by clicking "Runtime" and then "Run all". The notebook will load the dataset, process the images and temperature values, train the model, test the model, and finally profile the model using the Edge Impulse Python SDK. When creating the dataset, I saved some images and temperature values of the two classes as a Test dataset. The test data for the safe_environment class is in a safe_environment_test folder, and the test data for the fire class is in a fire_test folder.
After training, the model achieved a validation accuracy of 100%. However, this does not imply that the model is perfect! In this case, the features the model classifies are simple, only 50 epochs were used, and the dataset had just 120 images and 120 temperature values for training. To improve the model, we can add more data, update the model architecture, and increase the number of training cycles. For this demonstration, however, I determined this to be acceptable.
After testing the model, the notebook uses the Edge Impulse Python SDK for profiling and this enables us to get RAM, ROM and inference times of our model on a wide range of hardware from MCUs, CPUs, GPUs and AI accelerated boards, incredibly fascinating! We can see the performance estimates for the Arduino Nano 33 BLE Sense in the screenshot below. Also, during this profiling, the model is uploaded to the Edge Impulse project. You can clone my public Edge Impulse project and access the model using this link: Fire_detection_sensor_fusion.
When we go to the Edge Impulse project, we will see "Upload model" under "Impulse design". This is because our final global model was uploaded to the project during profiling.
We first need to configure some parameters in the Edge Impulse project. Click "Upload model" and a new interface will open on the right side of the page. Here, we need to select "Other" for the model input. Next, we select "Classification" for the model output since this is a classification model. Finally, in the output labels input box, we enter fire, safe_environment. Click "Save model" to finish the configuration.
When the Google Colab notebook tests the model, it saves an array of the test image data and temperature value to a text file named input_features.txt, which can be seen in the notebook files. We can copy the contents of the text file and paste them into the Edge Impulse project to test our model on the platform. In the screenshot below, we can see that the model classified the features as belonging to the fire class, which was the correct classification.
Once the model has been tested in the Edge Impulse Studio, we click "Deployment" and select Arduino library in "Search deployment options". This packages the Machine Learning model into a single library that we can include in Arduino sketches to run the model locally. One fascinating feature that the Edge Impulse Studio gives us is the ability to optimize the model. In this case, we can use the EON Compiler, a powerful tool that compiles machine learning models into more efficient and hardware-optimized C++ source code. The feature supports a wide variety of neural networks trained in TensorFlow or PyTorch, and a large selection of Machine Learning models trained in scikit-learn, LightGBM, or XGBoost. The EON Compiler also runs more models than other inferencing engines, while saving up to 65% of RAM usage. In my case, the TensorFlow Lite model optimization made the model use 174.7KB of RAM and 695.1KB of flash, while the EON Compiler reduced this to just 143.5KB of RAM and 666.6KB of flash. The accuracy is the same, but RAM consumption drops by 18%! After selecting the EON Compiler model optimization, we can click "Build", which downloads a .zip file to the computer.
After the Arduino library has finished downloading, we can open the Arduino IDE and install the zipped library. Afterwards, we open the inference sketch in the Arduino IDE. Feel free to clone the GitHub repository or copy and paste the code into an Arduino sketch on your computer. In the inference sketch, we need to ensure that the variables EI_CAMERA_RAW_FRAME_BUFFER_COLS, EI_CAMERA_RAW_FRAME_BUFFER_ROWS, WIDTH, and HEIGHT have the same image dimensions as the ones used for model training. Finally, we can upload the inference sketch to the Arduino Nano 33 BLE Sense. Once the code is uploaded, the Arduino board will record the room temperature, capture an image, and then classify whether the environment is safe or has a fire. The inference sketch follows a similar flow to the data collection code; the main difference is that the data are not printed to Serial. In fact, the inference sketch is also built from the nano_33ble_sense_camera.ino example code. I updated the code to also read the temperature value, and then in the function ei_camera_cutout_get_data we can append the temperature value to the buffer that is afterwards passed to the classifier, which in this case is our multi-input model.
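Conceptually, the modification looks something like the sketch below: the classifier's input is treated as all the image pixels plus one extra slot, and when the requested chunk reaches that last slot, the (scaled) temperature is written instead of a pixel. This is an illustration of the idea, not the author's exact code, and get_camera_pixel_as_float() is a hypothetical stand-in for the example's pixel conversion.

```cpp
static float g_scaled_temperature;   // temperature reading, scaled the same way as during training

static int ei_camera_cutout_get_data(size_t offset, size_t length, float *out_ptr) {
    // Assumption: the last feature slot of the model input is reserved for the temperature value.
    const size_t pixel_count = EI_CLASSIFIER_NN_INPUT_FRAME_SIZE - 1;

    for (size_t i = 0; i < length; i++) {
        size_t ix = offset + i;
        if (ix < pixel_count) {
            out_ptr[i] = get_camera_pixel_as_float(ix);   // hypothetical stand-in for the pixel conversion
        } else {
            out_ptr[i] = g_scaled_temperature;            // append the temperature as the final feature
        }
    }
    return 0;
}
```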
With the multi-input model running on the Arduino Nano 33 BLE Sense, the classification time was around 12 seconds. This is due to the fact that the Arduino board only has a 32-bit ARM Cortex-M4 CPU running at 64 MHz. However, for faster classification times, there are many supported hardware devices in the Edge Impulse platform that we could deploy the model to. For instance, the Arduino Nicla Vision microcontroller board combines a powerful STM32H747AII6 Dual ARM® Cortex® M7/M4 IC processor with a 2MP color camera that supports TinyML applications.
Finally, after collecting the dataset, training, and deployment, we have a multi-input sensor fusion application running on a small, low-power, resource-constrained Arduino Nano 33 BLE Sense. To test the model on the Arduino board, I placed it back next to the oven. In the first scenario, I had the oven turned off and an unlit candle in front of the OV7675 camera. In this situation, the model accurately classified the environment as safe, and the onboard RGB LED turned green. Afterwards, I turned on the oven to increase the temperature and lit the candle. In this situation, the model accurately classified the environment as belonging to the fire class, and the onboard RGB LED turned red.
In the image below, a candle was placed in front of the Arduino TinyML Kit as it was secured on a tripod. The candle was not lit and the room temperature was around 25 degrees Celsius. In this case, the model was able to accurately classify that the environment was safe and the Arduino RGB LED turned green.
In another case, the candle was lit but, since the flame was not big, the HTS221 temperature sensor still recorded a room temperature of around 25 degrees Celsius. However, since there was a flame, the model correctly classified the environment as belonging to the fire class, and the Arduino's onboard RGB LED turned red.
From this demonstration project, we have seen how to train a sensor fusion model and optimize it to run on a microcontroller such as the Arduino Nano 33 BLE Sense. The final model accurately classifies whether there is a fire or not using both a camera and a temperature sensor, and it understands the relationship between the two data sources. This project highlights the importance of leveraging sensor fusion and Tiny Machine Learning models to enhance fire detection, consequently contributing to the safety and well-being of individuals.
With sensor fusion, we gain more data, which allows for improved decision making. While the technique may appear to be easy, the software and algorithms that enable it are a challenge. This method presents its own set of obstacles, such as computational complexity and sensor compatibility.
The Edge Impulse Bring Your Own Model (BYOM) feature enables us to optimize and deploy our custom pretrained models (TensorFlow SavedModel, ONNX, or TensorFlow Lite) to any edge device using an Edge Impulse project. With sensor fusion, tensor slicing, and this powerful BYOM tool, vast opportunities are opening up for developing advanced Machine Learning models.
Using a DFRobot Firebeetle ESP32 to monitor and make predictions about air quality in an environment.
Created By: Kutluhan Aktar
Public Project Link:
Due to the ever-growing industrialization, forest degradation, and pollution, the delicate balance of ambient gases shifted. Thus, hazardous air pollutants impinge on the human respiratory system detrimentally, in addition to engendering climate change and poisoning wildlife. Even though governments realized that it was incumbent on them to act in order to prevent destructive air contaminants from pervading the ecosystem, we are too far away from obviating human-made air pollutants during the following decades. Therefore, it is still crucial to detect air pollutants to inform people with prescient warnings.
Since some air pollutants can react with each other and spread very rapidly, precedence must be given to detecting highly reactive gases (air contaminants), such as ozone (O3) and nitrogen compounds (NOx, NOy). Thus, in this project, I decided to focus on ozone (O3) and nitrogen dioxide (NO2) concentrations, which denote dangerous air pollution.
In ambient air, nitrogen oxides can occur from diverse combinations of oxygen and nitrogen. The higher combustion temperatures cause more nitric oxide reactions. In ambient conditions, nitric oxide is rapidly oxidized in air to form nitrogen dioxide by available oxidants, for instance, oxygen, ozone, and VOCs (volatile organic compounds). Hence, nitrogen dioxide (NO2) is widely known as a primary air pollutant (contaminant). Since road traffic is considered the principal outdoor source of nitrogen dioxide[^1], densely populated areas are most susceptible to its detrimental effects. Nitrogen dioxide causes a range of harmful effects on the respiratory system, for example, increased inflammation of the airways, reduced lung function, increased asthma attacks, and cardiovascular harm[^2].
Tropospheric, or ground-level ozone (O3), is formed by chemical reactions between oxides of nitrogen (NOx) and volatile organic compounds (VOCs). This chemical reaction is triggered by sunlight between the mentioned air pollutants emitted by cars, power plants, industrial boilers, refineries, and chemical plants[^3]. Depending on the level of exposure, ground-level ozone (O3) can have various effects on the respiratory system, for instance, coughing, sore throat, airway inflammation, increased frequency of asthma attacks, and increased lung infection risk. Some of these detrimental effects have been found even in healthy people, but symptoms can be more severe in people with lung diseases such as asthma[^4].
Since nitrogen dioxide (NO2), ozone (O3), and other photochemical oxidant reactions and transmission rates are inextricably related to air flow, heat, and ambient humidity, I decided to collect the following data parameters to create a meticulous data set:
Nitrogen dioxide concentration (PPM)
Ozone concentration (PPB)
Temperature (°C)
Humidity (%)
Wind speed
After perusing recent research papers on ambient air pollution, I noticed there are very few appliances focusing on collecting air quality data, detecting air pollution levels with machine learning, and providing surveillance footage for further examination. Therefore, I decided to build a budget-friendly and easy-to-use air station to forecast air pollution levels with machine learning and inform the user of the model detection results with surveillance footage consecutively, in the hope of forfending the plight of hazardous gases.
To predict air pollution levels, I needed to collect precise ambient hazardous gas concentrations in order to train my neural network model with notable validity. Therefore, I decided to utilize DFRobot electrochemical gas sensors. To obtain the additional weather data, I employed an anemometer kit and a DHT22 sensor. Since FireBeetle ESP32 is a compact and powerful IoT-purposed development board providing numerous features with its budget-friendly media (camera) board, I decided to use FireBeetle ESP32 in combination with its media board to run my neural network model and inform the user of the model detection results with surveillance footage. Due to memory allocation issues, I connected all sensors to an Arduino Mega, which collects the air quality data and transmits it to FireBeetle ESP32 via serial communication. I also connected three control buttons to the Arduino Mega to send commands to FireBeetle ESP32 over the same serial link.
Since the FireBeetle media board supports reading and writing information from/to files on an SD card, I stored the collected air quality data in separate CSV files on the SD card, named according to the selected air pollution class, to create a pre-formatted data set. In this regard, I was able to save and process data records via FireBeetle ESP32 without requiring any additional procedures.
After completing my data set, I built my artificial neural network model (ANN) with Edge Impulse to make predictions on air pollution levels (classes). Since Edge Impulse is compatible with nearly all microcontrollers and development boards, I did not encounter any issues while uploading and running my model on FireBeetle ESP32. As labels, I utilized the empirically assigned air pollution levels in accordance with the Air Quality Index (AQI) estimations provided by IQAir:
Clean
Risky
Unhealthy
After training and testing my neural network model, I deployed and uploaded the model on FireBeetle ESP32 as an Arduino library. Therefore, the air station is capable of detecting air pollution levels by running the model independently without any additional procedures or latency.
Since I focused on building a full-fledged AIoT air station predicting air pollution and informing the user of the model detection results with surveillance footage, I decided to develop a web application from scratch to obtain the detection results with surveillance footage from FireBeetle ESP32 via HTTP POST requests, save the received information to a MySQL database table, and display the stored air quality data with model detection results in descending order simultaneously.
Because the FireBeetle media board can only generate raw image data, this complementary web application executes a Python script to convert the obtained raw image data to a JPG file automatically before saving it to the server as surveillance footage. After saving the converted image successfully, the web application shows the most recently obtained surveillance footage and allows the user to inspect previous footage in descending order.
Lastly, to make the device as robust and compact as possible while operating outdoors, I designed a metallic air station case with a sliding front cover and a mountable camera holder (3D printable) for the OV7725 camera connected to the FireBeetle media board.
So, this is my project in a nutshell 😃
In the following steps, you can find more detailed information on coding, capturing surveillance footage, building a neural network model with Edge Impulse, running the model on FireBeetle ESP32, and developing a full-fledged web application to obtain the model detection results with surveillance footage from FireBeetle ESP32 via HTTP POST requests.
Since I focused on building a budget-friendly and accessible air station that collects air quality data and runs a neural network model to inform the user of air pollution via a PHP web application, I decided to design a sturdy and compact metallic case allowing the user to access the SD card after logging data, place the air quality sensors, and adjust the OV7725 camera effortlessly. To avoid overexposure to dust and prevent loose wire connections, I added a sliding front cover with a handle to the case. Then, I designed a separate camera holder mountable to the left side of the case at four different angles. Also, I decided to inscribe air pollution indicators on the sliding front cover to highlight the imminent pollution risk.
Since I needed to attach an anemometer to the case to collect wind speed data, I decided to design a semi-convex structure for the case. This unique shape also serves as a wind deflector that protects the air quality sensors from potential wind damage.
I designed the metallic air station case, its sliding front cover, and the mountable camera holder in Autodesk Fusion 360. You can download their STL files below.
Then, I sliced all 3D models (STL files) in Ultimaker Cura.
Since I wanted to create a solid metallic structure for the air station case with the sliding front cover and apply a unique alloy combination complementing the metallic theme, I utilized these PLA filaments:
eSilk Copper
eSilk Bronze
Finally, I printed all parts (models) with my Creality Sermoon V1 3D Printer and Creality CR-200B 3D Printer in combination with the Creality Sonic Pad. You can find more detailed information regarding the Sonic Pad in Step 1.1.
If you are a maker or hobbyist planning to print your 3D models to create more complex and detailed projects, I highly recommend the Sermoon V1. Since the Sermoon V1 is fully-enclosed, you can print high-resolution 3D models with PLA and ABS filaments. Also, it has a smart filament runout sensor and the resume printing option for power failures.
Furthermore, the Sermoon V1 provides a flexible metal magnetic suction platform on the heated bed. So, you can remove your prints without any struggle. Also, you can feed and remove filaments automatically (one-touch) due to its unique sprite extruder (hot end) design supporting dual-gear feeding. Most importantly, you can level the bed automatically due to its user-friendly and assisted bed leveling function.
Creality Sonic Pad is a beginner-friendly device to control almost any FDM 3D printer on the market with the Klipper firmware. Since the Sonic Pad uses precision-oriented algorithms, it provides remarkable results with higher printing speeds. The built-in input shaper function mitigates oscillation during high-speed printing and smooths ringing to maintain high model quality. Also, it supports G-code model preview.
Due to the Arduino library incompatibilities and the memory allocation issues, I decided to connect the electrochemical NO2 sensor, the electrochemical ozone sensor, the anemometer kit, and the DHT22 sensor to Arduino Mega so as to collect the required air quality data. Then, I utilized Arduino Mega to transmit the collected air quality data to FireBeetle ESP32 via serial communication.
Since Arduino Mega operates at 5V and FireBeetle ESP32 requires 3.3V logic level voltage, they cannot be connected with each other directly. Therefore, I utilized a bi-directional logic level converter to shift the voltage for the connections between FireBeetle ESP32 and Arduino Mega.
To display the collected information and notifications, I utilized an SH1106 OLED screen. To assign air pollution levels empirically while saving the collected data to individual CSV files on the SD card, I used the built-in MicroSD card module on the media board and added three control buttons.
After printing all parts (models), I fastened all components except the OV7725 camera to their corresponding slots on the metallic air station case via a hot glue gun. I also utilized the anemometer's screw kit to attach it more tightly to its connection points on the top of the metallic case.
I placed the OV7725 camera in the mountable camera holder and attached the camera holder to the metallic case via its snap-fit joints.
Then, I placed the sliding front cover via the dents on the metallic case.
As mentioned earlier, the mountable camera holder can be utilized to adjust the OV7725 camera at four different angles via the snap-fit joints.
To provide an exceptional user experience for this AIoT air station, I developed a full-fledged web application from scratch in PHP, HTML, JavaScript, CSS, and MySQL. This web application obtains the collected air quality data, the detected air pollution level (class) by the neural network model, and the captured surveillance footage from FireBeetle ESP32 via an HTTP POST request. After saving the received information to the MySQL database table for further inspection, the web application converts the received raw image data to a JPG file via a Python script. Then, the web application updates itself automatically to show the latest received information and surveillance footage. Also, the application displays all stored air quality data with model detection results in descending order and allows the user to inspect previous surveillance footage.
As shown below, the web application consists of three folders and seven code files:
/assets
-- background.jpg
-- class.php
-- icon.png
-- index.css
-- index.js
/env_notifications
-- /images
-- bmp_converter.py
index.php
show_records.php
update_data.php
📁 class.php
In the class.php file, I created a class named _main to bundle the following functions under a specific structure.
⭐ Define the _main class and its functions.
⭐ In the insert_new_data function, insert the given air quality data to the MySQL database table.
⭐ In the get_data_records function, retrieve all stored air quality data from the database table in descending order and return all data parameters as separate lists.
⭐ Define the required MariaDB database connection settings for LattePanda 3 Delta 864.
📁 update_data.php
⭐ Include the class.php file.
⭐ Define the air object of the _main class with its required parameters.
⭐ Get the current date & time and create the surveillance footage file name.
⭐ If FireBeetle ESP32 sends the collected air quality data parameters with the model detection result, save the received information to the given MySQL database table.
⭐ If FireBeetle ESP32 transfers raw image data as surveillance footage via an HTTP POST request to update the server, save the received raw image data as a TXT file to the env_notifications folder.
⭐ Convert the recently saved raw image data (TXT file) to a JPG file by executing a Python script via the terminal through the web application — bmp_converter.py.
You can get more information regarding converting raw image data in the following step.
⭐ After generating the JPG file from the raw image data, remove the converted TXT file from the server.
📁 show_records.php
⭐ Include the class.php file.
⭐ Define the air object of the _main class with its required parameters.
⭐ Obtain all saved air quality information in the database table as different lists for each data parameter and create HTML table rows by utilizing these arrays.
⭐ Get the name of the latest surveillance footage from the database table.
⭐ Then, create a JSON object from the recently generated HTML table rows and the elicited surveillance footage file name.
⭐ Finally, return the recently created JSON object.
📁 index.php
⭐ Create the web application interface, including the HTML table for displaying the stored air quality information with the model detection results in the MySQL database table and image frames for the latest and selected surveillance footage.
You can inspect and download the index.php file below.
📁 index.js (jQuery and AJAX)
⭐ Display the selected surveillance footage (image) on the web application interface via the HTML buttons added to each data record retrieved from the MySQL database table.
⭐ Every 5 seconds, make an HTTP GET request to the show_records.php file.
⭐ Then, decode the retrieved JSON object to obtain the HTML table rows generated from the database table rows and the latest surveillance footage file name.
⭐ Assign the elicited information to the corresponding HTML elements on the web application interface to inform the user automatically.
Since the FireBeetle media board can only generate raw image data due to its built-in OV7725 camera, I needed to convert the generated raw image data to readable image files so as to display them on the web application interface as surveillance footage. Since FireBeetle ESP32 cannot convert the generated raw image data due to memory allocation issues, I decided to convert the captured raw image data to a JPG file via the web application.
Even though PHP can convert raw image data to different image file formats, doing the conversion in PHP caused bad request issues because the web application receives the raw image data from FireBeetle ESP32 via HTTP POST requests. Hence, I decided to utilize Python to create JPG files from the raw image data, since Python imaging modules can handle the conversion in seconds.
By employing the terminal on LattePanda 3 Delta, the web application executes the bmp_converter.py file directly to convert images.
📁 bmp_converter.py
⭐ Include the required modules.
⭐ Obtain all raw images transferred by FireBeetle ESP32 and saved as TXT files under the env_notifications folder.
Since the web application needs the absolute path to execute the Python script via the terminal in order to convert images, provide the exact location of the env_notifications folder.
⭐ Then, convert each retrieved TXT file (raw image) to a JPG file via the frombuffer function.
⭐ Finally, save the generated JPG files to the images folder.
LattePanda 3 Delta is a pocket-sized hackable computer that provides ultra performance with the Intel 11th-generation Celeron N5105 processor.
Conveniently, LattePanda 3 Delta can run the XAMPP application, so it is effortless to create a server with a MariaDB database on LattePanda 3 Delta.
After setting the web application on LattePanda 3 Delta 864:
🎈⚠️📲 The web application (update_data.php) saves the information transferred by FireBeetle ESP32 via an HTTP POST request with URL query parameters to the given MySQL database table.
/update_data.php?no2=0.15&o3=25&temperature=25.20&humidity=65.50&wind_speed=3&model_result=Clean
🎈⚠️📲 When FireBeetle ESP32 transmits raw image data via the HTTP POST request, the web application converts the received raw image data to a JPG file by executing the bmp_converter.py file via the terminal.
python "C:\Users\kutlu\New E\xampp\htdocs\weather_station_data_center\env_notifications\bmp_converter.py"
🎈⚠️📲 On the web application interface (index.php), the application displays the concurrent list of data records saved in the database table as an HTML table, including HTML buttons for each data record to select the pertaining surveillance footage.
🎈⚠️📲 The web application updates its interface every 5 seconds automatically via the jQuery script to display the latest received air quality data with the model detection result and surveillance footage.
🎈⚠️📲 Until the user selects a surveillance image (footage), the web application shows the default surveillance icon in the latter image frame.
🎈⚠️📲 When the user selects a surveillance image via the assigned HTML buttons, the web application shows the selected image on the screen for further inspection.
🎈⚠️📲 For each air pollution level (class), the web application changes the row color in the HTML table to clarify and emphasize the model detection results:
Clean ➜ Green
Risky ➜ Midnight Green
Unhealthy ➜ Red
🎈⚠️📲 When the user hovers the cursor over the image frames, the web application highlights the selected frame.
Before proceeding with the following steps, I needed to set up FireBeetle ESP32 on the Arduino IDE and install the required libraries for this project.
https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
To display images (black and white) on the SH1106 OLED screen successfully, I needed to create monochromatic bitmaps from PNG or JPG files and convert those bitmaps to data arrays.
After setting up FireBeetle ESP32 and installing the required libraries, I programmed Arduino Mega to collect air quality data and transmit the collected data to FireBeetle ESP32 via serial communication. As explained in the previous steps, I encountered Arduino library incompatibilities and memory allocation issues when I connected the sensors directly to FireBeetle ESP32.
Nitrogen dioxide concentration (PPM)
Ozone concentration (PPB)
Temperature (°C)
Humidity (%)
Wind speed
Since I needed to assign air pollution levels (classes) empirically as labels for each data record while collecting air quality data to create a valid data set for my neural network model, I utilized three control buttons connected to Arduino Mega so as to choose among classes and transfer data records via serial communication. After selecting an air pollution level (class) by pressing a control button, Arduino Mega sends the selected class and the recently collected data to FireBeetle ESP32.
Control Button (A) ➡ Clean
Control Button (B) ➡ Risky
Control Button (C) ➡ Unhealthy
You can download the AIoT_weather_station_sensor_readings.ino file to try and inspect the code for collecting air quality data and transferring the collected data via serial communication.
⭐ Include the required libraries.
⭐ Define the collect number (1-100) for the electrochemical ozone sensor.
⭐ If necessary, modify the I2C address of the ozone sensor by utilizing its configurable dial switch.
⭐ Define the ozone sensor object.
⭐ If necessary, modify the I2C address of the electrochemical NO2 sensor by utilizing its configurable dial switch.
⭐ Define the NO2 sensor object.
⭐ Define the SH1106 OLED display (128x64) settings.
⭐ Define monochrome graphics.
⭐ Define the air pollution level (class) names and color codes.
⭐ Define the DHT22 temperature and humidity sensor settings and the DHT object.
⭐ Define the anemometer kit's voltage signal pin (yellow).
⭐ Initialize the hardware serial port (Serial1) to communicate with FireBeetle ESP32.
⭐ Initialize the SH1106 OLED display.
⭐ In the err_msg function, show the error message on the SH1106 OLED screen.
⭐ Check the electrochemical ozone sensor connection status and set its data-obtaining mode (active or passive).
⭐ Check the electrochemical NO2 sensor connection status and set its data-obtaining mode (active or passive).
⭐ Activate the temperature compensation feature of the NO2 sensor.
⭐ Initialize the DHT22 sensor.
⭐ If the sensors are working accurately, turn the RGB LED to blue.
⭐ Wait until electrochemical gas sensors heat (warm-up) for 3 minutes.
⭐ In the collect_air_quality_data function:
⭐ Collect the nitrogen dioxide (NO2) concentration.
⭐ Collect the ozone (O3) concentration.
⭐ Get the temperature and humidity measurements generated by the DHT22 sensor.
⭐ Calculate the wind speed (level) [1 - 30] according to the output voltage generated by the anemometer kit.
⭐ Combine all collected data items to create a data record.
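For reference, a minimal Arduino Mega sketch of this data collection step is shown below. The DHT22 calls use the Adafruit DHT library listed later among the required libraries, while the electrochemical NO2 and ozone readings are left as placeholder functions to be replaced with the corresponding DFRobot library calls; the pin assignments and the 0-5V-to-level conversion factor are assumptions rather than the exact values in AIoT_weather_station_sensor_readings.ino.

```cpp
#include <DHT.h>

// Hypothetical pin assignments; adjust them to match your wiring.
#define DHT_PIN         22
#define ANEMOMETER_PIN  A0

DHT dht(DHT_PIN, DHT22);

// One data record composed of the five collected parameters.
struct AirData { float no2; float o3; float temperature; float humidity; int wind_speed; };
AirData record;

// Stand-ins for the electrochemical sensor readings; replace them with the
// DFRobot_MultiGasSensor and DFRobot_OzoneSensor library calls.
float read_NO2_ppm() { return 0.0; }
float read_O3_ppb()  { return 0.0; }

void collect_air_quality_data() {
  record.no2         = read_NO2_ppm();          // nitrogen dioxide concentration (PPM)
  record.o3          = read_O3_ppb();           // ozone concentration (PPB)
  record.temperature = dht.readTemperature();   // °C
  record.humidity    = dht.readHumidity();      // %
  // The anemometer outputs 0-5V; map the 10-bit ADC reading to a wind speed level [1 - 30].
  float voltage = analogRead(ANEMOMETER_PIN) * (5.0 / 1023.0);
  record.wind_speed = constrain((int) round(voltage * 6.0), 1, 30);
}

void setup() {
  Serial.begin(115200);
  dht.begin();
}

void loop() {
  collect_air_quality_data();
  Serial.print(record.temperature); Serial.print(" C | wind level: "); Serial.println(record.wind_speed);
  delay(2000);
}
```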
⭐ In the home_screen function, display the collected air quality data on the SH1106 OLED screen.
⭐ In the data_screen function, display the given class icon on the SH1106 OLED screen and turn the RGB LED to the given class' color code.
⭐ If one of the control buttons (A, B, or C) is pressed, add the selected air pollution level to the recently generated data record and send it to FireBeetle ESP32 via serial communication. Then, notify the user according to the selected class.
⭐ In the run_screen function, inform the user when the collected data items are transferred to FireBeetle ESP32 via serial communication.
⭐ Every minute, transmit the collected air quality data parameters to FireBeetle ESP32 via serial communication in order to run the neural network model with the latest collected data. Then, turn the RGB LED to magenta.
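To illustrate the serial protocol described above, here is a simplified sketch of how the Save and Data packets might be composed and transmitted to FireBeetle ESP32 over Serial1. The packet layout, field order, and button pins are assumptions for demonstration; the authoritative format is in the downloadable AIoT_weather_station_sensor_readings.ino file.

```cpp
// Assumed control button pins; Serial1 (pins 18/19 on Arduino Mega) is wired to
// FireBeetle ESP32 through the bi-directional logic level converter.
const int BUTTON_A = 2, BUTTON_B = 3, BUTTON_C = 4;
unsigned long last_data_tx = 0;

// "Save" packet ('&' delimiter): FireBeetle ESP32 logs a new CSV sample with the selected class.
void send_save_packet(float no2, float o3, float temp, float humd, int wind, const String& level) {
  Serial1.println(String("Save&") + String(no2, 2) + "&" + String(o3, 0) + "&" + String(temp, 2)
                  + "&" + String(humd, 2) + "&" + String(wind) + "&" + level);
}

// "Data" packet (',' delimiter): FireBeetle ESP32 stores the values for the next inference.
void send_data_packet(float no2, float o3, float temp, float humd, int wind) {
  Serial1.println(String("Data,") + String(no2, 2) + "," + String(o3, 0) + "," + String(temp, 2)
                  + "," + String(humd, 2) + "," + String(wind));
}

void setup() {
  Serial1.begin(115200);
  pinMode(BUTTON_A, INPUT_PULLUP);
  pinMode(BUTTON_B, INPUT_PULLUP);
  pinMode(BUTTON_C, INPUT_PULLUP);
}

void loop() {
  // Dummy measurements for illustration; the real sketch uses the latest sensor readings.
  float no2 = 0.15, o3 = 25, temp = 25.2, humd = 65.5; int wind = 3;

  if (digitalRead(BUTTON_A) == LOW) send_save_packet(no2, o3, temp, humd, wind, "Clean");
  if (digitalRead(BUTTON_B) == LOW) send_save_packet(no2, o3, temp, humd, wind, "Risky");
  if (digitalRead(BUTTON_C) == LOW) send_save_packet(no2, o3, temp, humd, wind, "Unhealthy");

  // Every minute, transmit the latest measurements for inference on FireBeetle ESP32.
  if (millis() - last_data_tx > 60000UL) { send_data_packet(no2, o3, temp, humd, wind); last_data_tx = millis(); }
}
```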
After uploading and running the code for collecting air quality data and transferring the collected data to FireBeetle ESP32 via serial communication:
🎈⚠️📲 If the electrochemical gas sensors work accurately, the air station turns the RGB LED to blue and waits until the electrochemical gas sensors heat (warm-up) for 3 minutes.
🎈⚠️📲 The air station generates a data record from the recently collected air quality data and shows the collected data parameters on the SH1106 OLED screen to inform the user.
Nitrogen dioxide concentration (PPM)
Ozone concentration (PPB)
Temperature (°C)
Humidity (%)
Wind speed
🎈⚠️📲 If the control button (A) is pressed, Arduino Mega adds Clean as the selected air pollution level to the recently generated data record, transfers the modified data record to FireBeetle ESP32 via serial communication, and turns the RGB LED to green.
🎈⚠️📲 Then, it shows the unique monochrome icon of the selected air pollution level (class) on the SH1106 OLED screen.
🎈⚠️📲 If the control button (B) is pressed, Arduino Mega adds Risky as the selected air pollution level to the recently generated data record, transfers the modified data record to FireBeetle ESP32 via serial communication, and turns the RGB LED to yellow.
🎈⚠️📲 Then, it shows the unique monochrome icon of the selected air pollution level (class) on the SH1106 OLED screen.
🎈⚠️📲 If the control button (C) is pressed, Arduino Mega adds Unhealthy as the selected air pollution level to the recently generated data record, transfers the modified data record to FireBeetle ESP32 via serial communication, and turns the RGB LED to red.
🎈⚠️📲 Then, it shows the unique monochrome icon of the selected air pollution level (class) on the SH1106 OLED screen.
🎈⚠️📲 When FireBeetle ESP32 receives a data record, it creates a new CSV file under the samples folder on the SD card and combines the given air pollution level and the sample number as the file name. Then, FireBeetle ESP32 appends the received air quality data items with the given header to the created CSV file.
🎈⚠️📲 Also, FireBeetle ESP32 increments the sample number of the received air pollution level (class) by 1 to generate unique CSV files (samples).
You can get more detailed information on creating separate CSV files as samples in Step 6.
🎈⚠️📲 Every minute, Arduino Mega transmits the recently collected air quality data parameters to FireBeetle ESP32 via serial communication in order to obtain accurate prediction results after running an inference with the neural network model.
You can get more detailed information on running an inference with the neural network model in Step 7.
🎈⚠️📲 If Arduino Mega throws an error while operating, the air station shows the error message on the SH1106 OLED screen and prints the error details on the serial monitor.
🎈⚠️📲 Also, the air station prints notifications and sensor measurements on the serial monitor for debugging.
After collecting air quality data for nearly 2 months and creating separate CSV files for each data record on the SD card, I obtained a data set with notable validity and veracity.
You can get more detailed information regarding assigning air pollution levels depending on the Air Quality Index (AQI) estimations provided by IQAir in Step 5.
Before collecting and storing the air quality data, I checked IQAir for the AQI estimation of my region. Then, I derived an air pollution class (level) from the AQI estimation provided by IQAir in order to assign a label empirically to my samples (data records).
Clean
Risky
Unhealthy
When I completed logging the collected data and assigning labels, I started to work on my artificial neural network model (ANN) to detect ambient air pollution levels so as to inform people with prescient warnings before air pollutants engender harmful effects on the respiratory system.
Since Edge Impulse supports almost every microcontroller and development board due to its model deployment options, I decided to utilize Edge Impulse to build my artificial neural network model. Also, Edge Impulse makes scaling embedded ML applications easier and faster for edge devices such as FireBeetle ESP32.
As of now, Edge Impulse supports uploading samples as CSV files in different data structures thanks to its CSV Wizard, so the user can upload all data records in a single file even if the data type is not time series. However, when a data set is saved in a single CSV file, I usually need to follow the steps below to format it so as to train my model accurately:
Data Scaling (Normalizing)
Data Preprocessing
However, as explained in the previous steps, I employed FireBeetle ESP32 to save each data record to a separate CSV file on the SD card to create appropriately formatted samples (CSV files) for Edge Impulse. Therefore, I did not need to preprocess my data set before uploading samples.
Conveniently, Edge Impulse allows building predictive models that are automatically optimized in size and accuracy, and deploying the trained model as an Arduino library. Therefore, after collecting my samples, I was able to build an accurate neural network model to predict air pollution levels and run it on FireBeetle ESP32 effortlessly.
As long as the CSV file includes a header defining data fields, Edge Impulse can distinguish data records as individual samples in different data structures thanks to its CSV Wizard while adding existing data to an Edge Impulse project. Therefore, there is no need for preprocessing single CSV file data sets even if the data type is not time series.
Since Edge Impulse can infer the uploaded sample's label from its file name, I employed FireBeetle ESP32 to create a new CSV file for each data record and name the generated files by combining the given air pollution level and the sample number incremented by 1 for each class (label) automatically.
Clean.training.sample_1.csv
Clean.training.sample_2.csv
Risky.training.sample_1.csv
Risky.training.sample_2.csv
Unhealthy.training.sample_1.csv
Unhealthy.training.sample_2.csv
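As a small illustration of this naming convention, a helper like the one below can compose the sample file paths so that Edge Impulse's "Infer from filename" option picks up the label (the text before the first dot) automatically; the samples folder path and the ".training." segment are assumptions matching the examples above.

```cpp
// Compose a sample file name such as "/samples/Clean.training.sample_12.csv".
// For testing samples, the ".training." segment would be replaced accordingly.
String csv_sample_name(const String& level, int sample_number) {
  return String("/samples/") + level + ".training.sample_" + String(sample_number) + ".csv";
}
```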
After collecting air quality data and generating separate CSV files for nearly 2 months, I obtained my appropriately formatted samples on the SD card.
After generating training and testing samples successfully, I uploaded them to my project on Edge Impulse.
After uploading my training and testing samples successfully, I designed an impulse and trained it on air pollution levels (classes).
In Edge Impulse, an impulse is the machine learning pipeline that takes raw data, applies a processing block, and feeds the result to a learning block. I created my impulse by employing the Raw Data processing block and the Classification learning block.
The Raw Data processing block generates windows from data samples without any specific signal processing.
The Classification learning block represents a Keras neural network model. Also, it lets the user change the model settings, architecture, and layers.
According to my experiments with my neural network model, I modified the neural network settings and layers to build a neural network model with high accuracy and validity:
📌 Neural network settings:
Number of training cycles ➡ 50
Learning rate ➡ 0.005
Validation set size ➡ 15
📌 Extra layers:
Dense layer (20 neurons)
Dense layer (10 neurons)
After generating features and training my model with training samples, Edge Impulse evaluated the precision score (accuracy) as 95.7%.
The precision score (accuracy) is approximately 96%, which likely reflects the modest volume and variety of the training samples, since I only collected ambient air quality data near my home. In other words, the model was trained and validated on a limited set of samples that does not cover various regions. Therefore, I highly recommend retraining the model with local air quality data before running inferences in a different region.
After building and training my neural network model, I tested its accuracy and validity by utilizing testing samples.
The evaluated accuracy of the model is 97.78%.
After validating my neural network model, I deployed it as a fully optimized and customizable Arduino library.
After building, training, and deploying my model as an Arduino library on Edge Impulse, I needed to upload the generated Arduino library on FireBeetle ESP32 to run the model directly so as to create an easy-to-use and capable air station operating with minimal latency, memory usage, and power consumption.
Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single package while deploying models as Arduino libraries, I was able to import my model effortlessly to run inferences.
After importing my model successfully to the Arduino IDE, I programmed FireBeetle ESP32 to run inferences to detect air pollution levels and capture surveillance footage for further examination every 5 minutes. If manual testing is required, FireBeetle ESP32 can also run inference and capture surveillance footage when the built-in button on the FireBeetle media board is pressed.
User Button (Built-in) ➡ Run Inference
Also, I employed FireBeetle ESP32 to transmit the collected air quality data, the model detection result, and the captured surveillance footage (raw image data) to the web application via an HTTP POST request after running inferences successfully.
You can download the AIoT_weather_station_run_model.ino file to try and inspect the code for running Edge Impulse neural network models and transferring data to a web application via FireBeetle ESP32.
⭐ Include the required libraries.
⭐ Define the required parameters to run an inference with the Edge Impulse model.
⭐ Define the features array (buffer) to classify one frame of data.
⭐ Define the threshold value (0.60) for the model outputs (predictions).
⭐ Define the air pollution level (class) names:
Clean
Risky
Unhealthy
⭐ Define the Wi-Fi network and the web application settings hosted by LattePanda 3 Delta 864.
⭐ Define the FireBeetle media board pin out to modify the official esp_camera library for the built-in OV7725 camera.
⭐ Define the camera (image) buffer array.
⭐ Create a struct including all air quality data items as its elements.
⭐ Disable the brownout detector to avoid system reset errors.
⭐ Initialize the hardware serial port (UART2) with redefined pins to communicate with Arduino Mega.
⭐ Initiate the built-in SD card module on the FireBeetle media board.
⭐ Define the pin configuration settings of the built-in OV7725 camera on the media board.
⭐ Define the pixel format and the frame size settings.
FRAMESIZE_QVGA (320x240)
⭐ Since FireBeetle ESP32 does not have SPI memory (PSRAM), disable PSRAM allocation to avoid connection errors.
⭐ Initiate the built-in OV7725 camera on the media board.
⭐ Initialize the Wi-Fi module.
⭐ Attempt to connect to the given Wi-Fi network.
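The Wi-Fi initialization above boils down to the standard ESP32 WiFi.h calls; a minimal example with placeholder credentials looks like this:

```cpp
#include <WiFi.h>

// Placeholder credentials; replace them with your own network settings.
const char* ssid     = "<network_name>";
const char* password = "<network_password>";

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  // Block until the connection to the given Wi-Fi network is established.
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("\nConnected to the Wi-Fi network successfully!");
}

void loop() {}
```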
⭐ Obtain the data packet and commands transferred by Arduino Mega via serial communication.
⭐ In the save_data_to_CSV function:
⭐ Create a new CSV file on the SD card with the given file name.
⭐ Append the header and the given data items to the generated CSV file.
⭐ If Arduino Mega sends the Save command with the recently collected air quality data and the selected air pollution level:
⭐ Glean information as substrings from the transferred data packet by utilizing the ampersand (&) as the delimiter.
⭐ Increment the sample number of the given level (class) by 1 to create unique samples (CSV files).
⭐ Create a new CSV file under the samples folder on the SD card and combine the given air pollution level and the sample number as the file name. Then, append the received air quality data items with the given header to the generated CSV file.
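Below is a condensed sketch of how such a sample file can be created with the SD library on the FireBeetle media board. The chip-select pin, header text, and in-memory sample counters are illustrative assumptions; the actual implementation lives in AIoT_weather_station_run_model.ino.

```cpp
#include <SD.h>

// Hypothetical chip-select pin; check the FireBeetle media board documentation for the actual value.
#define SD_CS_PIN 4

// Per-class sample counters (Clean, Risky, Unhealthy) used to keep file names unique.
// In the real sketch these survive between samples; here they reset on every boot.
int sample_numbers[3] = {0, 0, 0};

void save_data_to_CSV(const String& level, int class_index, const String& csv_row) {
  sample_numbers[class_index]++;
  // Assumes the samples folder already exists on the SD card.
  String path = String("/samples/") + level + ".training.sample_" + String(sample_numbers[class_index]) + ".csv";
  File sample_file = SD.open(path, FILE_WRITE);
  if (!sample_file) { Serial.print("Could not create "); Serial.println(path); return; }
  sample_file.println("no2,o3,temperature,humidity,wind_speed");   // header defining the data fields
  sample_file.println(csv_row);                                    // e.g. "0.15,25,25.20,65.50,3"
  sample_file.close();
  Serial.print("Saved: "); Serial.println(path);
}

void setup() {
  Serial.begin(115200);
  if (!SD.begin(SD_CS_PIN)) Serial.println("SD card initialization failed!");
}

void loop() {
  // Example: log one Clean sample, then idle.
  save_data_to_CSV("Clean", 0, "0.15,25,25.20,65.50,3");
  while (true) delay(1000);
}
```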
⭐ If Arduino Mega sends the Data command with the recently collected air quality data items:
⭐ Glean information as substrings from the transferred data packet by utilizing the comma (,) as the delimiter.
⭐ Convert the received data items from strings to their corresponding data types and copy them to the struct as its elements.
⭐ Finally, clear the transferred data packet.
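A simplified version of this comma-delimited parsing, using only the Arduino String functions, could look like the following; the field order is assumed for illustration.

```cpp
// Holds the latest air quality data items received from Arduino Mega.
struct AirQualityData { float no2; float o3; float temperature; float humidity; int wind_speed; };
AirQualityData latest;

// Parse a packet such as "0.15,25,25.20,65.50,3" into the struct.
void parse_data_packet(String packet) {
  float values[5] = {0};
  for (int i = 0; i < 5; i++) {
    int comma = packet.indexOf(',');
    String item = (comma == -1) ? packet : packet.substring(0, comma);
    values[i] = item.toFloat();
    packet = (comma == -1) ? "" : packet.substring(comma + 1);
  }
  latest.no2 = values[0];
  latest.o3 = values[1];
  latest.temperature = values[2];
  latest.humidity = values[3];
  latest.wind_speed = (int) values[4];
}

void setup() {
  Serial.begin(115200);
  parse_data_packet("0.15,25,25.20,65.50,3");   // example packet
  Serial.println(latest.temperature);           // prints 25.20
}

void loop() {}
```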
⭐ In the run_inference_to_make_predictions function:
⭐ Scale (normalize) the collected data depending on the given model and copy the scaled data items to the features array (buffer).
⭐ If required, multiply the scaled data items while copying them to the features array (buffer).
⭐ Display the progress of copying data to the features buffer on the serial monitor.
⭐ If the features buffer is full, create a signal object from the features buffer (frame).
⭐ Then, run the classifier.
⭐ Print the inference timings on the serial monitor.
⭐ Obtain the prediction (detection) result for each given label and print them on the serial monitor.
⭐ A detection result greater than the given threshold (0.60) is treated as the label (air pollution level) predicted by the model.
⭐ Print the detected anomalies on the serial monitor, if any.
⭐ Finally, clear the features buffer (frame).
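Condensing the steps above, the inference routine maps onto the standard Edge Impulse Arduino SDK calls roughly as follows. This is a trimmed-down sketch that assumes the generated Arduino library is installed and that the feature scaling matches whatever was used while creating the training samples.

```cpp
#include <AI-assisted_Air_Quality_Monitor_inferencing.h>   // Edge Impulse Arduino library

// Feature buffer holding one frame of (scaled) air quality data.
float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];
const float threshold = 0.60;

String run_inference_to_make_predictions(float no2, float o3, float temp, float humd, float wind) {
  // Copy the data items into the features buffer. Any scaling applied here must
  // match the scaling used during training.
  float items[] = { no2, o3, temp, humd, wind };
  for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE && i < 5; i++) features[i] = items[i];

  // Wrap the buffer in a signal object and run the classifier.
  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return "";

  // Print the inference timings and pick the label whose confidence exceeds the threshold.
  ei_printf("DSP: %d ms, Classification: %d ms\n", result.timing.dsp, result.timing.classification);
  String detected = "";
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    ei_printf("%s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
    if (result.classification[ix].value >= threshold) detected = result.classification[ix].label;
  }
  return detected;   // "Clean", "Risky", or "Unhealthy" (empty if no class exceeds the threshold)
}
```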
⭐ In the take_picture function:
⭐ Release the image buffer to prevent FireBeetle ESP32 from throwing memory allocation errors while capturing pictures sequentially.
⭐ Capture a picture with the built-in OV7725 camera on the FireBeetle media board and save it to the image buffer.
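In esp_camera terms, the take_picture routine is essentially a frame-buffer swap; the fragment below assumes esp_camera_init has already been called with the FireBeetle media board pin map described earlier.

```cpp
#include "esp_camera.h"

// Frame buffer for the most recent capture.
camera_fb_t *image_buffer = NULL;

void take_picture() {
  // Return the previous frame buffer first so sequential captures do not exhaust
  // the limited memory on FireBeetle ESP32 (no PSRAM).
  if (image_buffer) { esp_camera_fb_return(image_buffer); image_buffer = NULL; }
  image_buffer = esp_camera_fb_get();   // capture a frame with the built-in OV7725
  if (!image_buffer) { Serial.println("Camera capture failed!"); return; }
  Serial.printf("Captured %u bytes (%ux%u)\n", image_buffer->len, image_buffer->width, image_buffer->height);
}
```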
⭐ In the make_a_post_request function:
⭐ Connect to the web application named weather_station_data_center.
⭐ Create the query string and add the latest received air quality data items with the model detection result to the string as URL query (GET) parameters.
⭐ Define the boundary parameter named EnvNotification so as to send the captured raw image data as a TXT file to the web application.
⭐ Get the total content length.
⭐ Set the Connection HTTP header as Keep-Alive.
⭐ Make an HTTP POST request with the created query string to the web application in order to transfer the captured raw image data as a TXT file.
⭐ Release the image buffer.
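Putting the request together with WiFiClient might look roughly like the sketch below. The server address, URL path, and form field name are placeholders; only the EnvNotification boundary and the query parameter names follow the conventions described earlier.

```cpp
#include <WiFi.h>

WiFiClient client;
// Placeholder server address; the web application is hosted by LattePanda 3 Delta 864.
const char* server = "192.168.1.20";
const int   port   = 80;

// Send the air quality data as URL query parameters and the raw frame as a TXT file
// inside a multipart body delimited by the EnvNotification boundary.
void make_a_post_request(const String& query, uint8_t* image, size_t image_len) {
  if (!client.connect(server, port)) { Serial.println("Connection to the web application failed!"); return; }

  String head = "--EnvNotification\r\nContent-Disposition: form-data; name=\"captured_image\"; "
                "filename=\"raw_image.txt\"\r\nContent-Type: text/plain\r\n\r\n";
  String tail = "\r\n--EnvNotification--\r\n";
  unsigned long total_len = head.length() + image_len + tail.length();

  client.print("POST /weather_station_data_center/update_data.php?");
  client.print(query);
  client.println(" HTTP/1.1");
  client.print("Host: "); client.println(server);
  client.println("Connection: Keep-Alive");
  client.println("Content-Type: multipart/form-data; boundary=EnvNotification");
  client.print("Content-Length: "); client.println(total_len);
  client.println();
  client.print(head);
  client.write(image, image_len);   // transfer the raw frame buffer in one go
  client.print(tail);
  client.stop();
}
```

After a successful inference, a call such as make_a_post_request("no2=0.15&o3=25&temperature=25.20&humidity=65.50&wind_speed=3&model_result=Clean", image_buffer->buf, image_buffer->len) would mirror the query string shown earlier; the captured_image form field name is an assumption and must match whatever update_data.php expects.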
⭐ Every 5 minutes or when the built-in button on the media board is pressed:
⭐ Start running an inference with the Edge Impulse model to make predictions on the air pollution levels (classes).
⭐ If the Edge Impulse model predicts an air quality level (class) successfully:
⭐ Create the request string consisting of the latest received air quality data items and the detected air pollution class.
⭐ Capture a picture with the built-in OV7725 camera on the media board.
⭐ Send the air quality data items, the model detection result, and the recently captured raw image as a TXT file to the web application via an HTTP POST request with URL query parameters.
⭐ Clear the predicted label (class).
⭐ Update the model timer.
My Edge Impulse neural network model predicts the probability of each label (air pollution class) for the given features buffer as an array of 3 numbers. They represent the model's "confidence" that the given features buffer corresponds to each of the three air pollution levels (classes) [0 - 2], as shown in Step 5:
0 — Clean
1 — Risky
2 — Unhealthy
After executing the AIoT_weather_station_run_model.ino file on FireBeetle ESP32:
🎈⚠️📲 When FireBeetle ESP32 receives the latest collected air quality data parameters from Arduino Mega via serial communication, it stores them to run an inference with accurate data items.
🎈⚠️📲 After Arduino Mega sends the air quality data parameters via serial communication successfully, the air station turns the RGB LED to magenta.
🎈⚠️📲 Every 5 minutes, the air station runs an inference with the Edge Impulse neural network model by applying the stored air quality data parameters to predict the air pollution level and captures surveillance footage with the OV7725 camera.
🎈⚠️📲 Then, it transfers the stored air quality data parameters, the model detection result, and the captured surveillance footage (raw image data) as a TXT file to the web application via an HTTP POST request with URL query parameters.
🎈⚠️📲 If manual testing is required, the air station can also perform the mentioned sequence when the built-in button on the FireBeetle media board is pressed.
🎈⚠️📲 Also, the air station prints notifications and model detection results on the serial monitor for debugging.
As far as my experiments go, the air station detects air pollution levels precisely, captures real-time surveillance footage, and communicates with the web application faultlessly :)
By applying neural network models trained on air quality data to detect air pollution levels, we can:
🎈⚠️📲 prevent human-made air pollutants from harming the respiratory system,
🎈⚠️📲 reduce the risk of increased asthma attacks and cardiovascular harm,
🎈⚠️📲 protect people with lung diseases from severe symptoms of air pollution,
🎈⚠️📲 provide prescient warnings regarding a surge in photochemical oxidant reactions and transmission rates.
[^1] Jarvis DJ, Adamkiewicz G, Heroux ME, et al, Nitrogen dioxide, WHO Guidelines for Indoor Air Quality: Selected Pollutants, Geneva: World Health Organization, 2010. 5, https://www.ncbi.nlm.nih.gov/books/NBK138707/
[^2] Nitrogen Dioxide, The American Lung Association, https://www.lung.org/clean-air/outdoors/what-makes-air-unhealthy/nitrogen-dioxide
[^3] Ground-level Ozone Basics, United States Environmental Protection Agency (EPA), https://www.epa.gov/ground-level-ozone-pollution/ground-level-ozone-basics
[^4] Health Effects of Ozone Pollution, United States Environmental Protection Agency (EPA), https://www.epa.gov/ground-level-ozone-pollution/health-effects-ozone-pollution
🎁🎨 Huge thanks to DFRobot for sponsoring these products:
⭐ FireBeetle ESP32
⭐ FireBeetle Covers - Camera&Audio Media Board
⭐ Gravity: Electrochemical Nitrogen Dioxide Sensor
⭐ Gravity: Electrochemical Ozone Sensor
⭐ Anemometer Kit
⭐ LattePanda 3 Delta 864
⭐ DFRobot 8.9" 1920x1200 IPS Touch Display
🎁🎨 Also, huge thanks to Creality for sending me a Sermoon V1 3D printer, a CR-200B 3D printer, and a Sonic Pad.
Before the first use, remove unnecessary cable ties and apply grease to the rails.
Test the nozzle and hot bed temperatures.
Go to Print Setup ➡ Auto leveling and adjust five predefined points automatically with the assisted leveling function.
Finally, place the filament into the integrated spool holder and feed the extruder with the filament.
Since the Sermoon V1 is not officially supported by Cura, download the latest version and copy the official printer settings provided by Creality, including Start G-code and End G-code, to a custom printer profile on Cura.
Since I wanted to improve my print quality and speed with Klipper, I decided to upgrade my Creality CR-200B 3D Printer with the Creality Sonic Pad.
Although the Sonic Pad is pre-configured for some Creality printers, it does not officially support the CR-200B yet. Therefore, I needed to add the CR-200B as a user-defined printer to the Sonic Pad. Since the Sonic Pad needs unsupported printers to be flashed with the self-compiled Klipper firmware before connection, I flashed my CR-200B with the required Klipper firmware settings via FluiddPI.
If you do not know how to write a printer configuration file for Klipper, you can start with the stock CR-200B configuration file.
After flashing the CR-200B with the Klipper firmware, copy the configuration file (printer.cfg) to a USB drive and connect the drive to the Sonic Pad.
After setting up the Sonic Pad, select Other models. Then, load the printer.cfg file.
After connecting the Sonic Pad to the CR-200B successfully via a USB cable, the Sonic Pad starts the self-testing procedure, which allows the user to test printer functions and level the bed.
After completing setting up the printer, the Sonic Pad lets the user control all functions provided by the Klipper firmware.
In Cura, export the sliced model in the ufp format. After uploading .ufp files to the Sonic Pad via the USB drive, it converts them to sliced G-code files automatically.
Also, the Sonic Pad can display model preview pictures generated by Cura with the Create Thumbnail script.
First of all, I soldered the female pin headers to the FireBeetle media board and the male pin headers to FireBeetle ESP32 before attaching the OV7725 camera.
When the electrochemical NO2 and ozone sensors are powered up for the first time, both sensors require operating for about 24-48 hours to generate calibrated and stable gas concentrations. In my case, I was able to obtain stable results after 30 hours of warming up. Although the electrochemical sensors only need to be calibrated once, they have a preheat (warm-up) time of about 5 minutes to evaluate gas concentrations accurately after being interrupted.
Since the anemometer kit requires a 9-24V supply voltage and generates a 0-5V output voltage (signal), I connected a USB buck-boost converter board to my Xiaomi power bank to provide a stable 20V supply voltage for the anemometer.
Since I had a test sample of the brand-new LattePanda 3 Delta 864, I decided to host my web application on LattePanda 3 Delta. Therefore, I needed to set up a LAMP web server.
First of all, install and set up XAMPP.
Then, go to the XAMPP Control Panel and click the MySQL Admin button.
Once the phpMyAdmin tool pops up, create a new database named air_quality_aiot.
After adding the database successfully, go to the SQL section to create a MySQL database table named entries with the required data fields.
When FireBeetle ESP32 transmits the collected air quality data with the model detection result, the web application saves the received information to the MySQL database table — entries.
Although DFRobot provides a specific driver package and library for FireBeetle ESP32 and its media board, I encountered some issues while running different sensor libraries in combination with the provided media board library. Therefore, I decided to utilize the official Arduino-ESP32 board package and modify its esp_camera library settings to make it compatible with the FireBeetle media board.
To add the Arduino-ESP32 board package to the Arduino IDE, navigate to File ➡ Preferences and paste the URL below under Additional Boards Manager URLs.
Then, to install the required core, navigate to Tools ➡ Board ➡ Boards Manager and search for esp32.
After installing the core, navigate to Tools ➡ Board ➡ ESP32 Arduino and select FireBeetle-ESP32.
Finally, download the required libraries to utilize the sensors and the SH1106 OLED screen with Arduino Mega:
DFRobot_MultiGasSensor |
DFRobot_OzoneSensor |
DHT-sensor-library |
Adafruit_SH1106 |
Adafruit-GFX-Library |
First of all, download a monochromatic bitmap converter tool.
Then, upload a monochromatic bitmap and select Vertical or Horizontal depending on the screen type.
Convert the image (bitmap) and save the output (data array).
Finally, add the data array to the code and print it on the screen.
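For instance, assuming display is the Adafruit_SH1106 object defined in the Arduino Mega sketch, a converted 8x8 icon can be stored in PROGMEM and drawn with the Adafruit GFX drawBitmap function as sketched below; the bitmap bytes are just an example pattern.

```cpp
// 8x8 example icon produced by the converter tool (each byte is one row of pixels).
static const unsigned char PROGMEM sample_icon[] = {
  0b00111100,
  0b01000010,
  0b10100101,
  0b10000001,
  0b10100101,
  0b10011001,
  0b01000010,
  0b00111100
};

// Draw the icon roughly in the middle of the 128x64 screen.
void show_sample_icon() {
  display.clearDisplay();
  display.drawBitmap(60, 28, sample_icon, 8, 8, WHITE);
  display.display();
}
```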
In this project, I needed to utilize accurate air pollution levels for each data record composed of the air quality data I collected. Therefore, I needed to obtain local Air Quality Index (AQI) estimations for my region. Since IQAir calculates AQI estimations based on satellite PM2.5 data for locations lacking ground-based air monitoring stations and provides hourly AQI estimations with air quality levels by location, I decided to employ IQAir to obtain local AQI estimations.
You can inspect my Edge Impulse project as a public project.
First of all, sign up for Edge Impulse and create a new project.
Navigate to the Data acquisition page and click the Upload existing data button.
Before uploading samples, go to the CSV Wizard to set the rules to process all uploaded CSV files.
Upload a CSV file example to check data fields and items.
Select the data structure (time-series data or not).
Define a column to obtain labels for each data record if it is a single CSV file data set.
Then, define the columns containing data items and click the Finish wizard button.
After setting the rules, choose the data category (training or testing) and select Infer from filename under Label to deduce labels from CSV file names automatically.
Finally, select CSV files and click the Begin upload button.
Go to the Create impulse page. Then, select the Raw Data processing block and the Classification learning block. Finally, click Save Impulse.
Before generating features for the neural network model, go to the Raw data page and click Save parameters.
After saving parameters, click Generate features to apply the Raw data processing block to training samples.
Finally, navigate to the Classifier page and click Start training.
To validate the trained model, go to the Model testing page and click Classify all.
To deploy the validated model as an Arduino library, navigate to the Deployment page and select Arduino library.
Then, choose the Quantized (int8) optimization option to get the best performance possible while running the deployed model.
Finally, click Build to download the model as an Arduino library.
After downloading the model as an Arduino library in the ZIP file format, go to Sketch ➡ Include Library ➡ Add .ZIP Library...
Then, include the AI-assisted_Air_Quality_Monitor_inferencing.h file to import the Edge Impulse neural network model.
Assess water pollution levels based on applied chemical water quality tests and ultrasonic scanning for air bubbles.
Created By: Kutluhan Aktar
Public Project Link: https://studio.edgeimpulse.com/public/366673/latest
Even though most of us consider water contamination as a gradual occurrence, especially for thriving landscapes, impending pollution can pervade water bodies instantaneously. In the case of enclosed water bodies such as closed lakes, pernicious contaminants can arise in a week and threaten aquatic environments despite not manifesting any indications. These furtively spreading pollutants can even impinge on the health of terrestrial animals by not only poisoning precious water sources but also withering aquatic plants.
In most cases, conducting only surface-level water quality tests is insufficient to pinpoint the hiding pollutants since contaminants can form in the underwater substrate by the accumulation of the incipient chemical agents. These underwater chemical reactions are commonly instigated by detritus, industrial effluents, and toxic sediment rife in the underwater substrate. After the culmination of the sinking debris, these reactions can engender algal blooms, hypoxia (dead zones), and expanding barren lands[^1]. Since the mentioned occurrences are only the result of prolonged water pollution, they lead to the inexorable progress of complex toxic chemical interactions, even with plastic debris[^2]. Therefore, the precedence must be given to identifying the underlying conditions of increasing underwater chemical reactions.
Especially in the lower substrate levels, before reaching hazardous amounts, the combined chemical reactions between pollutants yield enough gas molecules to accumulate as small-to-moderate air bubbles underwater. These lurking gas pockets affect aquatic plant root systems, deliver noxious contaminants to the surface, and alter water quality unpredictably due to prevalent emerging contaminants. As a result of the surge of toxic air gaps, the affected water body can undergo sporadic aquatic life declines, starting with invertebrate and fry (or hatchling) deaths. Although not every instance of underwater air bubble activity can be singled out as an imminent toxic pollution risk, such activity can be utilized as a vital indicator for water quality testing to preclude potential environmental hazards.
In addition to protecting natural enclosed water bodies, detecting the accumulating underwater pollutants can also be beneficial and profitable for commercial aquatic animal breeding or plant harvesting, widely known as aquaculture. Since aquaculture requires the controlled cultivation of aquatic organisms in artificially enclosed water bodies (freshwater or brackish water), such as fish ponds and aquariums, the inflation of underwater air bubbles transferring noxious pollutants to the surface can engender sudden animal deaths, wilting aquatic plants, and devastating financial losses. Especially for fish farming or pisciculture involving more demanding species, the accumulating air bubbles in the underwater substrate can initiate a chain reaction resulting in the loss of all fish acclimatized to the enclosed water body. In severe cases, this can lead to algae-clad artificial environments threatening terrestrial animals and the incessant decline in survival rates.
After perusing recent research papers on identifying air bubbles in the underwater substrate, I noticed that there are no practical applications focusing on detecting underwater air bubbles and assessing water pollution consecutively so as to diagnose potential toxic contaminants before instigating detrimental effects on the natural environment or a commercial fish farm. Therefore, I decided to develop a feature-rich AIoT device to identify underwater air bubbles via a neural network model by applying ultrasonic imaging as a nondestructive inspection method and to assess water pollution consecutively based on multiple chemical water quality tests via an object detection model. In addition to AI-powered functions, I also decided to build capable user interfaces and a push notification service via Telegram.
Before working on data collection procedures and model training, I thoroughly searched for a natural or artificial environment demonstrating the ebb and flow of underwater substrate toxicity due to overpopulation and decaying detritus. Nevertheless, I could not find a suitable option near my hometown because of endangered aquatic life, unrelenting habitat destruction, and disposal of chemical waste mostly caused by human-led activities. Thus, I decided to set up an artificial aquatic environment simulating noxious air bubbles in the underwater substrate and potential water pollution risk. After conducting a meticulous analysis of fecund aquatic life with which I can replicate fish farm conditions in a medium-sized aquarium, I decided to set up a planted freshwater aquarium for harmonious and proliferating species — livebearers (guppies), Neocaridina shrimp, dwarf (or least) crayfish (Cambarellus Diminutus), etc. In the following steps, I shall explain all species in my controlled environment (aquarium) with detailed instructions.
Since the crux of identifying underwater air bubbles and assessing water pollution simultaneously requires developing an AI-driven device supporting multiple machine learning models, I decided to construct two different data sets — ultrasonic scan data (buffer) and chemical water quality test result (color-coded) images, build two different machine learning models — neural network and object detection, and run the trained models on separate development boards. In this regard, I was able to program distinct and feature-rich user interfaces for each development board, focusing on a different aspect of the complex AI-based detection process, and thus avoid memory allocation issues, latency, reduced model accuracy, and intricate data collection methods due to multi-sensor conflict.
Since Nano ESP32 is a brand-new and high-performance Arduino IoT development board providing a u-blox® NORA-W106 (ESP32-S3) module, 16 MB (128 Mbit) Flash, and an embedded antenna, I decided to utilize Nano ESP32 to collect ultrasonic scan (imaging) information and run my neural network model. Since I needed to utilize submersible equipment to generate precise aquatic ultrasonic scans, I decided to connect a DFRobot URM15 - 75KHZ ultrasonic sensor (via RS485-to-UART adapter module) and a DS18B20 waterproof temperature sensor to Nano ESP32. To produce accurate ultrasonic images from single data points and match the given image shape (20 x 20 — 400 points), I added a DFRobot 6-axis accelerometer. Finally, I connected an SSD1306 OLED display and four control buttons to program a feature-rich user interface.
I also employed Nano ESP32 to transfer the produced ultrasonic scan data and the selected air bubble class to a basic web application (developed in PHP) via an HTTP POST request. In this regard, I was able to save each ultrasonic scan buffer with its assigned air bubble class to a separate text (TXT) file and construct my data set effortlessly. I shall clarify the remaining web application features below.
After completing my ultrasonic scan data set, I built my artificial neural network model (ANN) with Edge Impulse to identify noxious air bubbles lurking in the underwater substrate. Considering the unique structure of ultrasonic imaging data, I employed the built-in Ridge classifier as the model classifier, provided by Edge Impulse Enterprise. As a logistic regression method with L2 regularization, Ridge classification combines conventional classification techniques with Ridge regression for multi-class classification tasks. Since Edge Impulse is compatible with nearly all microcontrollers and development boards, even for complex Sklearn linear models, I did not encounter any issues while uploading and running my advanced model on Nano ESP32. As labels, I simply differentiated the ultrasonic scan samples depending on the underwater air bubble presence:
normal
bubble
After training and testing my neural network model with the Ridge classifier, I deployed the model as an Arduino library and uploaded it to Nano ESP32. Therefore, the device is capable of identifying underwater air bubbles by running the neural network model without any additional procedures or latency.
Since UNIHIKER is an exceptionally compact single-board computer providing a built-in touchscreen, integrated Python modules, and micro:bit-compatible edge connector support, I decided to utilize UNIHIKER to collect chemical water quality test result (color-coded) images and run my object detection model. To capture image samples of multiple water quality tests, I connected a high-quality USB webcam to UNIHIKER. Then, I programmed a feature-rich user interface (GUI) and displayed the interactive interface on the built-in touchscreen by employing the integrated Python modules on Thonny.
After completing my image data set, I built my object detection model with Edge Impulse to assess water pollution levels based on the applied chemical water quality tests. Since detecting water pollution levels based on color-coded chemical water quality tests is a complicated task, I decided to employ a highly advanced machine learning algorithm from the NVIDIA TAO Toolkit, fully supported by Edge Impulse Enterprise: RetinaNet, which is an exceptional algorithm for detecting smaller objects. Since Edge Impulse Enterprise provides configurable backbones for RetinaNet and is compatible with nearly every development board, I did not encounter any issues while uploading and running my NVIDIA TAO RetinaNet object detection model on UNIHIKER. As labels, I utilized empirically assigned pollution levels while observing the chemical water tests:
sterile
dangerous
polluted
After training and testing my RetinaNet object detection model, I deployed the model as a Linux (AARCH64) application (.eim) and uploaded it to UNIHIKER. Thus, the device is capable of assessing water pollution levels based on the applied chemical water quality tests by running the object detection model independently without any additional procedures, reduced accuracy, or latency.
Even though this underwater air bubble and water pollution detection device is composed of two separate development boards, I focused on building full-fledged AIoT features with seamless integration and enabling the user to access the interconnected device features within feature-rich and easy-to-use interfaces. Therefore, I decided to develop a versatile web application from scratch in order to obtain the generated ultrasonic scan buffer (20 x 20 = 400 data points) with the selected air bubble class via an HTTP POST request from Nano ESP32 and save the received information as a text (TXT) file. Furthermore, similar to the ultrasonic scan samples, the web application can save model detection results (the buffer passed to the neural network model and the detected label) as text files in a separate folder.
Then, I employed the web application to communicate with UNIHIKER to generate a pre-formatted CSV file from the stored sample text files (ultrasonic scan data records) and transfer the latest neural network model detection result (ultrasonic scan buffer and the detected label) via an HTTP GET request.
As mentioned earlier, each generated ultrasonic scan buffer provides 400 data points as a 20 x 20 ultrasonic image; however, Nano ESP32 cannot render an image from the given buffer after running the neural network model with the Ridge classifier. Therefore, after receiving the latest model detection result via the web application, I employed UNIHIKER to modify a template image (black square) with the built-in OpenCV functions so as to convert the given ultrasonic scan buffer to a JPG file and save the modified image, visualizing the latest aquatic ultrasonic scan with thoroughly encoded pixels.
Since the RetinaNet object detection model provides accurate bounding box measurements, I also utilized UNIHIKER to modify the resulting images to draw the associated bounding boxes and save the modified resulting images as JPG files for further inspection.
After conducting experiments with both models and producing significant results, I decided to set up a basic Telegram bot to inform the user of the latest model detection results by transferring the latest generated ultrasonic image with the detected air bubble class and the latest modified resulting image of the object detection model. Since Telegram is a cross-platform cloud-based messaging service with a fully supported HTTP-based Bot API, Telegram bots can receive images from the local storage directly without requiring a hosting service. Thus, I was able to transfer the modified images (of both models) from UNIHIKER to the Telegram bot without establishing an SSL connection.
Considering the tricky operating conditions near an aquaculture facility, and to provide a single-unit device structure, I decided to design a unique PCB after testing all prototype connections on breadboards. Since I wanted my PCB design to represent the enchanting and mysterious underwater aquatic life, I decided to design a Squid-inspired PCB. Thanks to the micro:bit-compatible edge connector, I was able to attach all components and development boards to the PCB smoothly.
Finally, to make the device as robust and compact as possible, I designed a complementing Aquatic-themed case (3D printable) with a modular holder encasing the PCB outline, a hang-on aquarium connector mountable to the PCB holder, a hang-on camera holder to place the high-quality USB webcam when standing idle, and a removable top cover allowing the user to attach sensors to the assigned slots. The semicircular-shaped mounting brackets are specifically designed to contain the waterproof temperature sensor.
So, this is my project in a nutshell 😃
In the following steps, you can find more detailed information on coding, collecting ultrasonic scan information, capturing chemical water quality test result images, building neural network and object detection models with Edge Impulse Enterprise, running the trained models on Nano ESP32 and UNIHIKER, developing a versatile web application, and setting up a Telegram bot to inform the user via push notifications.
⭐ UNIHIKER | Inspect
⭐ URM15 - 75KHZ Ultrasonic Sensor | Inspect
⭐ Gravity: RS485-to-UART Signal Adapter Module | Inspect
⭐ Serial 6-Axis Accelerometer | Inspect
⭐ LattePanda 3 Delta 864 | Inspect
Before prototyping my Squid-inspired PCB design, I inspected the detailed pin reference of Nano ESP32, the micro:bit connector-based UNIHIKER pinout, and the supported transmission protocols of the measurement sensors. Then, I checked the wireless (Wi-Fi) communication quality between Nano ESP32, UNIHIKER, and the web application while transferring and receiving data packets.
Then, I designed my Squid-inspired PCB by utilizing Autodesk Fusion 360 and KiCad. Since I wanted to design a distinctive 3D-printed hang-on holder to simplify the PCB placement on the simulated fish farm (aquarium), I created the PCB outline on Fusion 360 and then imported the outline file (DXF) to KiCad. As mentioned earlier, I made a squid the centerpiece of my PCB design since I wanted my device to evoke the enchanting underwater atmosphere.
To replicate this air bubble and water pollution detection device, you can download the Gerber file below and order my PCB design from ELECROW directly.
Normally, it would not be possible to embed most of the commercial single-board computers directly into a PCB without applying arduous disassembly methods. Nevertheless, UNIHIKER provides a micro:bit-compatible connector to access the GPIO interface of the microcontroller coprocessor (RISC-V). Therefore, I was able to embed UNIHIKER as the centerpiece of my PCB by utilizing a micro:bit-compatible edge connector from Kitronik.
If you want to add the Kitronik edge connector to your PCB designs, you can inspect its KiCad component library and footprint.
By utilizing a TS100 soldering iron, I attached headers (female), a DS18B20 waterproof temperature sensor, a Kitronik micro:bit-compatible edge connector, pushbuttons (6x6), 5 mm common anode RGB LEDs, a 4.7K resistor, and a power jack to the Squid PCB.
📌 Component list on the PCB:
A1 (Headers for Arduino Nano ESP32)
UNIHIKER1 (Kitronik micro:bit-compatible Edge Connector)
DS18B20 (DS18B20 Waterproof Temperature Sensor)
URM15 (Headers for RS485-to-UART Signal Adapter Module)
ACC1 (Headers for Serial 6-Axis Accelerometer)
S1 (Headers for SSD1306 OLED Display)
R1 (4.7K Resistor)
K1, K2, K3, K4 (6x6 Pushbutton)
D1, D2 (5 mm Common Anode RGB LED)
J2 (Headers for Available UNIHIKER Pins)
J1 (Power Jack)
Since I focused on building a feature-rich and accessible AI-powered device that identifies noxious underwater air bubbles via aquatic ultrasonic scans and evaluates water pollution based on chemical water quality tests via object detection, informing the user via Telegram push notifications, I decided to design a robust and modular case. The case allows the user to hang the Squid PCB on the aquarium, place the high-quality USB webcam when standing idle, and position the ultrasonic sensor effortlessly while scanning the underwater substrate. To avoid overexposure to water and prevent open wire connections from short circuits, I added a removable top cover mountable to the main case via snap-fit joints. The semicircular-shaped mounting brackets on the top cover let the user attach the DS18B20 waterproof temperature sensor effortlessly. Then, I designed a unique PCB holder encasing the Squid PCB outline and a hang-on aquarium connector mountable to the PCB holder via M3 screws and nuts. To place the high-quality USB webcam when standing idle, I also designed a hang-on camera holder attachable to the side of the aquarium. Furthermore, I decided to emboss aquatic life with sound-based graphic icons on the removable top cover and the camera symbol on the camera holder to highlight the qualifications of this AI-powered underwater air bubble detection device.
Since I needed to position the URM15 ultrasonic sensor accurately while scanning the underwater substrate and generating data buffers, I added a special cylindrical slot to the end point of the L-shaped main case in order to fasten the ultrasonic sensor seamlessly.
I designed the L-shaped main case, the removable top cover, the Squid PCB holder, the hang-on aquarium connector of the PCB holder, and the hang-on camera holder in Autodesk Fusion 360. You can download their STL files below.
Then, I sliced all 3D models (STL files) in Ultimaker Cura.
Since I wanted to give the device case a mystical, watery look and apply a unique underwater theme representing the mesmerizing aquatic life, I utilized these PLA filaments:
ePLA-Silk Magic Green-Blue (main case and top cover)
ePLA-Matte Light Blue (PCB holder and hang-on connectors)
Finally, I printed all parts (models) with my brand-new Anycubic Kobra 2 Max 3D Printer.
After printing all parts (models), I attached the URM15 ultrasonic sensor into its special cylindrical slot on the end point of the L-shaped main case and fastened the remaining components to their corresponding slots within the main case via a hot glue gun.
Then, I fastened the Squid PCB to its unique PCB holder via the hot glue gun, encasing the PCB outline. After fastening the Squid PCB, I attached the hang-on aquarium connector to the back of the PCB holder via M3 screws and nuts.
Since the removable top cover has special semicircular-shaped mounting brackets, the DS18B20 waterproof temperature sensor can be attached externally to the top cover.
Finally, I affixed the top cover to the main case via its provided snap-fit joints.
Since the main case contains all cables required for the connections between the Squid PCB and sensors, the device provides a single-unit structure and operates without wiring redundancy.
As explained earlier, before working on data collection procedures, I needed to find a natural or artificial environment demonstrating the ebb and flow of underwater substrate toxicity and water quality fluctuations due to overpopulation and decaying detritus. Unfortunately, I could not find a suitable natural environment near my hometown due to endangered aquatic life, unrelenting habitat destruction, and disposal of chemical waste mostly caused by human-led activities. Since I did not have access to an aquaculture facility to observe underwater substrate toxicity because of commercial aquatic animal breeding or plant harvesting, I decided to set up an artificial aquatic environment simulating noxious air bubbles in the underwater substrate and potential water pollution risk. Instead of setting up a small artificial garden pond for the commercial breeding of profitable fish (mostly for food), I decided to utilize a medium-sized aquarium (10 gallons) to replicate fish farm (or pisciculture) conditions.
Since this aquarium setting let me inspect the abrupt changes in the lower underwater substrate, I was able to conduct precise experiments to collect aquatic ultrasonic scan data for air bubble identification with ultrasonic imaging and capture chemical water quality test result (color-coded) images for water pollution detection.
After conducting a painstaking analysis of prolific aquatic life with which I can observe commercial fish farm conditions affecting the lower underwater substrate with noxious air bubbles and exacerbating the decreasing water quality due to decaying detritus, I decided to set up a planted freshwater aquarium for harmonious and proliferating species that can thrive in a small freshwater aquarium — livebearers (guppies), Neocaridina shrimp, dwarf (or least) crayfish (Cambarellus Diminutus), etc.
To set up a self-sustaining aquarium manifesting harsh fish farm conditions, I added these aquatic species:
🐠 Mosaic Dumbo Ear Guppies
🐠 Snow White Guppies
🐠 Half Black Guppies
🐠 Green Snakeskin Cobra Guppies
🐠 Red Rose Guppies
🦐 Red Sakura Neocaridina Shrimps
🦐 Black Rose Neocaridina Shrimps
🦐 Vietnam Leopard Neocaridina Shrimps
🦐 Blue Angel Neocaridina Shrimps
🦐 Sakura Orange Neocaridina Shrimps
🦐 Red Rili Neocaridina Shrimps
🦐 Carbon Rili Neocaridina Shrimps
🦐 Green Jelly Neocaridina Shrimps
🦐 Yellow Fire Neon Neocaridina Shrimps
🦞 Cambarellus Diminutus — Dwarf (or least) Crayfish
🐌 Yellow Mystery (Apple) Snails
🐌 Blue Mystery (Apple) Snails
🐌 Black Poso Rabbit Snails
🐌 Bumblebee Horn (Nerite) Snails
🐌 Ramshorn Snails (removed due to overpopulation)
After deciding on the fecund aquatic species for my aquarium, I allowed them to spawn and breed for nearly five months and observed the changes in the aquarium due to overbreeding and decaying detritus.
After my submerged aquatic plants, floating plants (frogbit and duckweed), and emersed (root-submerged) pothos flourished, they filtered free ammonia, nitrates, and phosphates, diminished excess algae, and provided oxygen. Therefore, I was able to eliminate the accumulating contaminants caused by the regular feeding schedule in a small aquarium and focus on detecting the underwater air bubbles and assessing water pollution due to prolonged overbreeding and decaying detritus.
⚠️ Disclaimer: To simulate the abrupt water quality fluctuations of a commercial fish farm, I let my aquarium go overstock with guppy fry and shrimplets, which led to the accumulation of excess waste, occasional Ramshorn snail blooms, and sporadic algae blooms. Thus, to maintain the ideal experiment conditions for identifying noxious air bubbles lurking in the underwater substrate, I needed to do regular water changes, sometimes every four days. After completing my experiments, I safely transferred abundant guppies and shrimps to my local fish store.
After concluding the device assembly, I hung the Squid PCB holder and the camera holder on the front side of the aquarium while collecting ultrasonic scan data buffers and capturing chemical water quality test result images. In this regard, I was able to place the high-quality USB webcam on the hang-on camera holder when standing idle and position the URM15 ultrasonic sensor precisely while scanning the underwater substrate to produce accurate ultrasonic images.
Since I designed a single-unit device structure, I did not encounter any issues while conducting extended experiments.
To increase the bottom surface area and observe abundant noxious air bubbles while collecting ultrasonic scan data, I added umpteen marimo moss balls covering the bottom of the tank. In this regard, I was able to provide plentiful underwater substrate gaps for incipient air bubbles to accumulate.
BotFather is an official Telegram bot that lets the user build and manage bots within the Telegram app without any coding or subscription required. I utilized BotFather to create a simple Telegram bot to inform the user via push notifications.
Aquatic Ultrasonic Imaging and Water Testing
aquatic_ultrasonic_bot
Since I wanted to send push notifications via the HTTP-based Telegram Bot API from UNIHIKER but not retrieve information back, I did not need to establish an SSL connection to set a webhook for the Telegram Bot API.
Thanks to the official Telegram Bot API, I only needed to obtain the chat id parameter to be able to send push notifications with the secured Telegram bot authorization token.
To fetch the required chat id, I utilized the getUpdates method (HTTP GET request), which shows all incoming bot updates by using long polling and returns an array of Update objects.
https://api.telegram.org/bot<token>/getUpdates
message ➡ chat ➡ id ➡ 6465514194
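If you are setting up your own bot, a quick way to retrieve this chat id is a short Python script using the requests module, as sketched below. The token is a placeholder for the bot authorization token that BotFather provides.

```python
# Minimal sketch: fetch the Telegram chat id via the getUpdates method.
# <token> is a placeholder for the bot authorization token provided by BotFather.
import requests

BOT_TOKEN = "<token>"

def get_chat_id():
    # getUpdates collects incoming bot updates via long polling and returns Update objects.
    response = requests.get(f"https://api.telegram.org/bot{BOT_TOKEN}/getUpdates", timeout=10)
    for update in response.json().get("result", []):
        message = update.get("message")
        if message:
            return message["chat"]["id"]   # message ➡ chat ➡ id
    return None

print(get_chat_id())
```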
Since I needed to obtain the ultrasonic scan data buffers and the given air bubble class from Nano ESP32 so as to save the data records as text (TXT) files, I decided to develop a basic web application.
Also, the web application can generate a pre-formatted CSV file from the stored data records (text files) when requested via an HTTP GET request to construct a data set effortlessly.
In addition to the data collection features, similar to the ultrasonic scan samples, the web application can save model detection results transferred by Nano ESP32 via an HTTP POST request — buffer passed to the neural network model and the detected air bubble label — as text files in a separate folder.
As shown below, the web application consists of two folders and two code files:
/detection
/sample
generate.php
index.php
scan_data_items.csv
📁 index.php
⭐ Obtain the current date and time.
⭐ Initiate the text file name for the received ultrasonic scan data buffer by adding the collection or prediction date.
⭐ If Nano ESP32 transfers the data type and the selected or detected air bubble class for the received ultrasonic scan buffer via GET query parameters, modify the text file name accordingly. Then, select the folder to save the generated text file — sample or detection.
⭐ If Nano ESP32 transfers an ultrasonic scan data buffer via an HTTP POST request as a new sample or after running the neural network model, save the received buffer with the selected or detected air bubble class to the folder associated with the given data type as a TXT file — sample or detection.
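To test this index.php flow without the device, a request roughly like the Python sketch below can emulate the Nano ESP32 transfer. The endpoint path, query parameter names, and form field name are assumptions chosen for illustration, so adjust them to match your own server configuration.

```python
# Hypothetical test client emulating the Nano ESP32 upload (endpoint path and
# parameter names are illustrative assumptions, not the exact ones in my sketch).
import requests

SERVER = "http://10.1.2.3/Aquatic_Ultrasonic_Imaging/index.php"  # assumed web app address

# A fake 20 x 20 ultrasonic scan buffer: 400 comma-separated distance readings.
scan_buffer = ",".join(str(25.0 + (i % 20) * 0.5) for i in range(400)) + ","

response = requests.post(
    SERVER,
    params={"type": "sample", "class": "bubble"},         # data type and air bubble class
    files={"UltrasonicScan": ("scan.txt", scan_buffer)},  # buffer sent as a TXT file
    timeout=10,
)
print(response.status_code, response.text)
```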
📁 generate.php
⭐ In the read_scans function:
⭐ Get all text file paths under the sample folder via the built-in glob function.
⭐ Read each text file to obtain the saved ultrasonic scan data buffers.
⭐ Derive the selected air bubble class of the data record from the given text file name.
⭐ Then, remove the redundant comma from the end of the data record.
⭐ After decoding 400 comma-separated data points from the given data record, append the retrieved data items with the selected class as a child array to the information array (parent) by utilizing built-in array_merge and array_push functions.
⭐ Finally, return the modified parent array consisting of the fetched data items.
⭐ In the create_CSV function:
⭐ Obtain the generated parent array, including data items and the assigned class for each stored ultrasonic scan data record — sample.
⭐ Create a new CSV file — scan_data_items.csv.
⭐ Define and add the header (class and data fields) to the created CSV file.
⭐ Append each child array (element) of the parent array as a new row to the CSV file.
⭐ Finally, close the generated CSV file.
⭐ In the get_latest_detection function:
⭐ Via the built-in scandir function, obtain the latest model detection result saved as a text file under the detection folder — ultrasonic scan buffer passed to the neural network model.
⭐ Derive the detected air bubble label from the given file name.
⭐ Remove the redundant comma from the end of the given buffer.
⭐ Add the detected label to the revised buffer.
⭐ Then, pass the generated data packet as a string.
⭐ If requested by the user via an HTTP GET request, create a pre-formatted CSV file from the stored aquatic ultrasonic scan samples (text files) — data records.
⭐ If requested by the user via an HTTP GET request, obtain the latest model detection result — ultrasonic scan buffer passed to the neural network model and the detected air bubble class — and return the generated data packet as a string.
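Since generate.php is essentially plain file handling, the Python sketch below reproduces the read_scans and create_CSV logic conceptually. The sample folder follows the structure shown above, while the CSV field names are assumptions.

```python
# Conceptual Python equivalent of generate.php (read_scans + create_CSV).
# The sample folder matches the structure above; the CSV field names are assumptions.
import csv
import glob
import os

def read_scans(sample_folder="sample"):
    rows = []
    for path in glob.glob(os.path.join(sample_folder, "*.txt")):
        # Derive the assigned air bubble class from the file name,
        # e.g. sample_bubble__2024_04_03_16_53_08.txt -> "bubble".
        label = os.path.basename(path).split("_")[1]
        with open(path) as f:
            record = f.read().strip().rstrip(",")   # remove the redundant trailing comma
        rows.append([label] + record.split(","))    # 400 comma-separated data points
    return rows

def create_csv(rows, csv_path="scan_data_items.csv"):
    header = ["class"] + [f"d_{i}" for i in range(400)]   # assumed data field names
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

create_csv(read_scans())
```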
Since I wanted to build a feasible and accessible AIoT underwater air bubble and water pollution detection device not dependent on cloud or hosting services, I decided to host my web application on LattePanda 3 Delta 864. Therefore, I needed to set up a LAMP web server.
LattePanda 3 Delta is a pocket-sized hackable computer that provides ultra performance with the Intel 11th-generation Celeron N5105 processor.
Conveniently, LattePanda 3 Delta can run the XAMPP application. So, it is effortless to create a server with a MariaDB database on LattePanda 3 Delta.
Since Nano ESP32 has the well-known Nano form factor and provides Wi-Fi connectivity via the u-blox® NORA-W106 (ESP32-S3) module, I decided to employ Nano ESP32 to transfer data packets directly to the web application, including the produced aquatic ultrasonic scan buffer, the selected air bubble class for samples, and the detected air bubble label after running the neural network model.
Nevertheless, before proceeding with the following steps, I needed to set up Nano ESP32 in the Arduino IDE, install the required libraries, and configure some default settings.
DFRobot_RTU | Download
DFRobot_WT61PC | Download
OneWire | Download
DallasTemperature | Download
Adafruit_SSD1306 | [Download](https://github.com/adafruit/Adafruit_SSD1306)
Adafruit-GFX-Library | Download
⭐ In the logo.h file, I defined multi-dimensional arrays to group the assigned logos (interface and class) and their sizes — width and height.
Although UNIHIKER is an outstandingly compact single-board computer providing a built-in touchscreen, integrated Python modules, and a microcontroller coprocessor, I still needed to install the required Python modules and set up the necessary software before proceeding with the following steps.
Server (Host): 10.1.2.3
Account (Username): root
Password: dfrobot
sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev python3-pip
pip3 install cython==0.29.36
pip3 install pyaudio edge_impulse_linux
After setting up Nano ESP32 in the Arduino IDE, I programmed Nano ESP32 to initiate an aquatic ultrasonic scan, generate an ultrasonic scan data buffer according to the movements detected by the accelerometer, and transfer the generated ultrasonic scan buffer to the web application via an HTTP POST request.
Since I wanted to provide a feature-rich user interface allowing the user to assign labels while collecting data samples, I decided to connect the SSD1306 OLED display and four control buttons to Nano ESP32. Via the user interface, I was able to assign air bubble classes empirically and send the generated ultrasonic scan buffer with the selected air bubble class (label) directly to the web application. As mentioned earlier, Nano ESP32 does not provide an onboard storage option. Thus, by transferring samples to the web application, I obviated the need for connecting external storage to Nano ESP32.
Since Nano ESP32 features three hardware serial (UART) ports, excluding the USB serial port, I was able to connect multiple sensors requiring serial communication without a data transmission conflict.
As explained in the previous steps, the web application sorts the transferred data packet to save ultrasonic scan samples as text files named according to the assigned classes.
This AI-powered underwater air bubble detection device comprises two separate development boards — Nano ESP32 and UNIHIKER — performing interconnected features for data collection and running advanced AI models. Thus, the described code snippets show the different aspects of the same code file. Please refer to the code files below to inspect all interconnected functions in detail.
📁 AIoT_Aquatic_Ultrasonic_Imaging.ino
⭐ Include the required libraries.
⭐ Add the interface icons and the assigned class logos (converted C arrays) to be shown on the SSD1306 OLED display — logo.h.
⭐ Define the required server configurations for the web application hosted on LattePanda 3 Delta 864.
⭐ Then, initialize the WiFiClient object.
⭐ Define the buffer (array) and allocate the buffer size to save the ultrasonic scan data items — a 20 x 20 image (400 data points).
⭐ Define the required configuration parameters and the address to register settings for the URM15 ultrasonic sensor.
⭐ Define the modbus object and assign the hardware serial port (Serial1) to obtain the information generated by the ultrasonic sensor via the RS485-to-UART signal adapter module.
⭐ Define the accelerometer object and assign the hardware serial port (Serial2) to obtain the information generated by the 6-axis accelerometer via serial communication.
⭐ Define the required configuration settings for the DS18B20 waterproof temperature sensor.
⭐ Configure the SSD1306 OLED display.
⭐ Create a struct (_data) to list and access the information generated by the 6-axis accelerometer easily.
⭐ Initialize the first hardware serial port (Serial1) to communicate with the URM15 ultrasonic sensor via the RS485-to-UART signal adapter module.
⭐ Initialize the second hardware serial port (Serial2) to communicate with the 6-axis accelerometer.
⭐ Set the URM15 ultrasonic sensor to trigger mode, select the external temperature compensation, and enable the temperature compensation function by overwriting the control register variable — byte (LSB).
⭐ Initiate the 6-axis accelerometer and configure its data output frequency.
⭐ Initialize the DS18B20 temperature sensor.
⭐ Attempt to connect to the given Wi-Fi network and wait for the successful network connection.
⭐ In the make_a_post_request function:
⭐ Connect to the web application named Aquatic_Ultrasonic_Imaging.
⭐ Create the query string by adding the given URL query (GET) parameters, including buffer data type, the selected class, and the detected label.
⭐ Define the boundary parameter named UltrasonicScan so as to send the generated ultrasonic scan data buffer (400 points) as a text (TXT) file to the web application.
⭐ Get the total content (data packet) length.
⭐ Make an HTTP POST request with the created query string to the web application in order to transfer the generated ultrasonic scan data buffer as a TXT file with the selected class or the label detected by the neural network model.
⭐ Wait until transferring the ultrasonic scan (text) buffer.
⭐ In the read_ultrasonic_sensor function:
⭐ Configure the external temperature value by utilizing the evaluated water temperature to generate precise distance measurements.
⭐ Obtain the temperature-compensated distance measurement produced by the URM15 ultrasonic sensor, except if the sensor is out of range.
⭐ In the read_accelerometer function, obtain the X, Y, and Z-axis movement variables generated by the 6-axis accelerometer — acceleration, angular velocity, and angle.
⭐ In the get_temperature function, obtain the water temperature in Celsius, estimated by the DS18B20 waterproof temperature sensor.
⭐ In the ultrasonic_imaging function:
⭐ Detect real-time device motions by reviewing the movement variables (X-axis and Y-axis) generated by the 6-axis accelerometer — acceleration and angular velocity.
⭐ If the device is gradually moving underwater within an arbitrary square, collect the temperature-compensated distance measurements produced by the URM15 ultrasonic sensor and save them as data points until completing the ultrasonic scan data buffer — 20 x 20 (400 points).
⭐ Change the highlighted menu option by operating the onboard control buttons — A and C.
⭐ Show the selected (highlighted) menu option with its assigned interface icon on the SSD1306 OLED display.
⭐ After selecting a menu option, if the control button B is pressed, navigate to the highlighted interface (menu) option.
⭐ If the first option (Show Readings) is activated:
⭐ Obtain the information produced by the ultrasonic sensor and the accelerometer.
⭐ Then, display the assigned interface logo and the retrieved sensor information on the SSD1306 screen for debugging.
⭐ If the control button D is pressed, redirect the user to the home screen.
⭐ If the second option (Ultrasonic+++) is activated:
⭐ Obtain the information produced by the ultrasonic sensor and the accelerometer.
⭐ Initiate the ultrasonic image scanning procedure and save data points until completing the scan buffer — 20 x 20 (400 points).
⭐ Display the ultrasonic scan progress (collected points) on the SSD1306 screen.
⭐ If the control button D is pressed, redirect the user to the home screen.
⭐ If the third option (Save Samples) is activated:
⭐ Display the selectable labels (air bubble classes) with their associated buttons.
⭐ Via the onboard control buttons (A and C), assign an air bubble class (normal or bubble) to the produced ultrasonic scan data buffer.
⭐ With the passed label, transfer the data type (sample or detection) and the given ultrasonic scan data buffer by making an HTTP POST request to the web application.
⭐ According to the data transmission success, notify the user by showing the associated connection icon on the screen.
⭐ If the control button D is pressed, redirect the user to the home screen.
🐠📡💧📊 If Nano ESP32 connects to the Wi-Fi network successfully, the device shows the home screen with the menu (interface) options on the SSD1306 screen.
Show Readings
Ultrasonic+++
Save Samples
Run Inference
🐠📡💧📊 The device lets the user change the highlighted menu option by pressing the control buttons — A (↓) and C (↑).
🐠📡💧📊 While the user adjusts the highlighted menu option, the device displays the associated interface icon on the screen.
🐠📡💧📊 After highlighting a menu option, if the control button B is pressed, the device navigates to the selected option.
🐠📡💧📊 After activating a menu option, the device returns to the home screen if the user presses the control button D.
🐠📡💧📊 If the user activates the first menu option — Show Readings:
🐠📡💧📊 The device displays the information produced by the ultrasonic sensor and the accelerometer on the SSD1306 screen for debugging.
🐠📡💧📊 Then, the device turns the RGB LED (connected to Nano ESP32) to yellow.
🐠📡💧📊 If the user activates the second menu option — Ultrasonic+++:
🐠📡💧📊 The device turns the RGB LED to cyan.
🐠📡💧📊 The device detects real-time motions while the ultrasonic sensor is submerged by reviewing the movement variables produced by the 6-axis accelerometer — acceleration and angular velocity.
🐠📡💧📊 If the device is gradually moving underwater within an arbitrary square, Nano ESP32 collects the temperature-compensated distance measurements produced by the ultrasonic sensor and saves them as data points until concluding the ultrasonic scan buffer — 20 x 20 (400 points).
🐠📡💧📊 After initiating the ultrasonic image scanning procedure, the device shows the scan progress (collected points) on the SSD1306 screen.
🐠📡💧📊 When Nano ESP32 completes collecting 400 data points of the scan buffer, the device notifies the user via the screen and turns the RGB LED to green.
🐠📡💧📊 If the user activates the third menu option — Save Samples:
🐠📡💧📊 The device turns the RGB LED to magenta and displays the selectable labels (air bubble classes) with their associated buttons.
A) Class => normal
C) Class => bubble
🐠📡💧📊 Via the onboard control buttons (A and C), the device lets the user assign an air bubble class (normal or bubble) to the generated ultrasonic scan data buffer empirically.
🐠📡💧📊 After pressing a control button (A or C), the device transfers the passed label and the generated ultrasonic scan data buffer to the web application via an HTTP POST request.
🐠📡💧📊 If Nano ESP32 transfers the given data packet successfully to the web application, the device notifies the user by showing the assigned connection icon on the screen and turning the RGB LED to green.
🐠📡💧📊 After receiving the ultrasonic scan buffer, the web application saves the buffer as a text (TXT) file (data record) to the sample folder by adding the passed label and the collection date to the file name.
sample_normal__2024_03_14_07_52_41.txt
sample_bubble__2024_04_03_16_53_08.txt
Since all underwater air bubble activity cannot be singled out as an imminent toxic pollution risk, I decided to enable this air bubble detection device with the ability to assess potential water pollution based on chemical water quality tests.
Even though there are various water quality tests for fish tanks, I decided to utilize color-coded chemical tests produced by the renowned full-range supplier for aquariums, ponds, and terrariums — sera. In this regard, I was able to make the object detection model determine the water pollution levels easily by the color discrepancies of the applied water quality tests.
After researching the most common indicators of water pollution in a retail fish farm, in this case, my overpopulated medium-sized aquarium simulating harsh fish farm conditions, I decided to apply these four water quality tests regularly:
After following the provided instructions thoroughly for each chemical test and observing the water quality levels (color codes) from a new water change state to the peak of the underwater air bubble activity, I managed to group water pollution levels into three categories:
sterile
dangerous
polluted
After setting up the necessary software on UNIHIKER via SSH and installing the required modules, I programmed UNIHIKER to capture the water quality test result images with the USB webcam and save them as samples.
Since I wanted to provide a feature-rich user interface to capture water quality test result image samples, assign labels, and access the interconnected features, I decided to program an interactive user interface (GUI — Tkinter application) with the integrated Python modules. Since UNIHIKER provides an onboard touchscreen and two control buttons, I did not need to connect additional components to display the user interface. Via the micro:bit-compatible edge connector on the Squid PCB, I added a secondary RGB LED to inform the user of the device status while performing operations related to UNIHIKER.
As explained earlier, I managed to group water pollution levels into three categories. Thus, I added the corresponding pollution levels as labels to the file names of each sample while capturing images to create a valid data set for the object detection model.
This AI-powered underwater air bubble detection device, assessing water pollution based on chemical tests, comprises two separate development boards — UNIHIKER and Nano ESP32 — performing interconnected features for data collection and running advanced AI models. Thus, the described code snippets show the different aspects of the same code file. Please refer to the code files below to inspect all interconnected functions in detail.
📁 _class.py
To bundle all functions under a specific structure, I created a class named aquarium_func. In the following steps, I will clarify the remaining functions of this class. Please refer to the _class.py file to inspect all interconnected functions.
⭐ In the display_camera_feed function:
⭐ Obtain the real-time video stream (frames) generated by the high-quality USB webcam.
⭐ Resize the latest captured camera frame depending on the provided image sample sizes of the Edge Impulse object detection model.
⭐ Then, resize the same frame to display a snapshot of the latest captured camera frame on the onboard touchscreen.
⭐ Stop the real-time camera feed if requested.
⭐ In the take_snapshot function:
⭐ Save the latest snapshot frame to a temporary image file — snapshot.jpg — since the built-in Python module for Tkinter-based GUI does not support images as numpy arrays.
⭐ Then, show the snapshot image saved in the assets folder on the onboard touchscreen in order to notify the user of the latest captured camera frame.
⭐ Finally, store the latest image (depicted via the snapshot) resized according to the given model's frame sizes as the latest sample for further usage.
⭐ In the save_img_sample function:
⭐ If the user selects a pollution class via the built-in control button B (on UNIHIKER), create the file name of the image sample by adding the selected class and the collection date.
⭐ Then, save the latest stored frame to the samples folder via the built-in OpenCV functions and notify the user via the user interface (GUI).
Home
Aquatic Ultrasonic Scan
Water Quality Test
⭐ In the create_user_interface function:
⭐ Design the feature-rich user interface via the provided unihiker module.
⭐ Group the generated GUI elements and their screen coordinates into separate arrays for each interface section (layer) so as to navigate windows effortlessly.
⭐ To add callback functions to the GUI elements, utilize the onclick parameter (triggered when the element is clicked) and the lambda expression.
⭐ In the board_configuration function:
⭐ Employ the built-in control buttons on UNIHIKER to provide a versatile user experience.
⭐ If the control button A (UNIHIKER) is pressed, navigate to the home screen.
⭐ If the control button B (UNIHIKER) is pressed, change the selected pollution class incrementally and adjust the background color of the Capture Sample button under the Water Quality Test section accordingly.
⭐ Also, adjust the secondary RGB LED according to the assigned class color.
⭐ In the interface_config function:
⭐ Depending on the passed command, process the GUI elements and their screen coordinates grouped under separate arrays for each section to shift windows (layers) effortlessly.
⭐ If requested, clear the selected pollution class.
Since the captured camera frame size is not compatible with the object detection model, I utilized the built-in OpenCV features to resize the captured frame according to the required dimensions for both the model and the user interface (snapshot).
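As a rough illustration of that resizing step, the OpenCV sketch below captures one frame and produces both versions; the two target resolutions are assumptions, since the exact values depend on the trained model and the touchscreen layout.

```python
# Minimal OpenCV resizing sketch; both target resolutions are assumptions.
import cv2

MODEL_SIZE = (320, 320)      # assumed Edge Impulse model input size (multiples of 32)
SNAPSHOT_SIZE = (240, 240)   # assumed on-screen snapshot size for the Tkinter GUI

camera = cv2.VideoCapture(0)                      # high-quality USB webcam
ok, frame = camera.read()
camera.release()
if ok:
    model_frame = cv2.resize(frame, MODEL_SIZE)   # frame passed to the object detection model
    snapshot = cv2.resize(frame, SNAPSHOT_SIZE)   # smaller snapshot for the touchscreen
    cv2.imwrite("snapshot.jpg", snapshot)         # saved as a temporary file (assets folder in my setup)
```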
After executing the main.py file on UNIHIKER:
🐠📡💧📊 The device displays the home screen, showing two main sections, on the built-in touchscreen of UNIHIKER.
Aquatic Ultrasonic Scan
Water Quality Test
🐠📡💧📊 If the user clicks the Water Quality Test button, the device opens the Water Quality Test section.
🐠📡💧📊 While obtaining real-time frames produced by the high-quality USB webcam, the device resizes the latest captured camera frame depending on the provided image frame size of the Edge Impulse object detection model.
🐠📡💧📊 Also, the device resizes the same frame as a smaller snapshot of the latest captured camera frame.
🐠📡💧📊 When the user clicks the Snapshot button, the device saves the latest generated snapshot image to a temporary image file since the built-in Python module for Tkinter-based GUI does not support images as numpy arrays. Then, the device stores the latest frame modified by the model frame size.
🐠📡💧📊 After saving frames, the device shows the latest snapshot image on the onboard touchscreen in order to notify the user of the latest stored camera frame.
🐠📡💧📊 If the user clicks the onboard control button B (on UNIHIKER), the device changes the selected pollution class incrementally and adjusts the background color of the Capture Sample button according to the assigned class color.
Green ➡ sterile
Yellow ➡ dangerous
Red ➡ polluted
🐠📡💧📊 After selecting a pollution class successfully, the device lets the user save an image sample by clicking the Capture Sample button.
🐠📡💧📊 To construct a comprehensive image data set, the device adds the selected class (label) and the collection date to each image sample file name.
IMG_sterile_20240330_120423.jpg
After collecting image samples of chemical water quality test results (color-coded), I constructed a valid and notable image data set for the object detection model.
As explained earlier, I set up a freshwater aquarium to simulate the harsh fish farm conditions leading to noxious air bubbles lurking in the underwater substrate.
Then, I utilized the URM15 (waterproof) ultrasonic sensor to generate ultrasonic scan buffers of the bottom of the tank, consisting of 400 data points as a 20 x 20 ultrasonic image. While collecting and saving aquatic ultrasonic scan buffers, I empirically differentiated the produced samples (data records) depending on the presence of toxic air bubbles:
normal
bubble
When I completed collecting aquatic ultrasonic scan data buffers via the web application, I started to work on my artificial neural network model (ANN) to identify toxic underwater air bubbles manifesting potential water pollution risk.
Since Edge Impulse provides developer-friendly tools for advanced AI applications and supports almost every development board due to its model deployment options, I decided to utilize Edge Impulse Enterprise to build my artificial neural network model. Also, Edge Impulse Enterprise incorporates state-of-the-art machine learning algorithms and scales them for edge devices such as Nano ESP32.
Furthermore, Edge Impulse provides an accessible tool named CSV Wizard, which lets the user inspect a single CSV file, select the data type, obtain the label and data item fields from the given header, and register the configuration settings for the subsequent CSV files.
Since I employed the web application to generate a pre-formatted CSV file from all ultrasonic scan buffer samples saved as text files and to sort the data items, I was able to follow the steps below effortlessly to process my data set and train my neural network model accurately:
Data Scaling (Resizing)
Data Labeling
After processing my data set, I decided to apply an advanced machine learning algorithm to train my neural network model, considering the unique and intricate structure of aquatic ultrasonic imaging data. After conducting various experiments with different model classifiers on Edge Impulse, I employed the Ridge classifier supported by Edge Impulse Enterprise since it provided the most accurate precision results for identifying underwater air bubbles.
As a linear classification method with L2 regularization, the Ridge classifier adapts conventional Ridge regression to multi-class classification tasks. Since the integrated L2 regularization penalizes overly large coefficients to enhance model performance and prevent overfitting, and since the penalization rate is controlled by the hyperparameter alpha, the Ridge classifier lets the user tune how strongly the penalty affects the model coefficients while mapping classification targets onto a regression framework.
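To give a feel for how the alpha hyperparameter behaves, here is a minimal scikit-learn sketch of a Ridge classifier trained on 400-feature vectors. It only illustrates the algorithm with synthetic data and is not the pipeline Edge Impulse generates; alpha is set to 0.4 to mirror the setting listed below.

```python
# Minimal Ridge classifier sketch (scikit-learn) on synthetic 400-point scan buffers.
# This is only an illustration of the algorithm, not my Edge Impulse pipeline.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(20.0, 60.0, size=(120, 400))        # 120 fake 20 x 20 scan buffers
y = (X.mean(axis=1) > 40.0).astype(int)             # synthetic rule: 0 = bubble, 1 = normal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# alpha regulates the L2 penalty strength; 0.4 mirrors the neural network setting below.
model = RidgeClassifier(alpha=0.4)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_test, y_test))
```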
Conveniently, Edge Impulse Enterprise allows building predictive models with enhanced machine learning algorithms optimized in size and accuracy and deploying the trained model as an Arduino library. Therefore, after formatting and processing my data set, I was able to build a valid neural network model with the Ridge classifier to identify toxic underwater air bubbles and run the optimized model on Nano ESP32 without any additional requirements.
You can inspect my neural network model with the Ridge classifier on Edge Impulse as a public project.
After generating training and testing samples successfully, I uploaded them to my project on Edge Impulse Enterprise.
After uploading and labeling my training and testing samples successfully, I designed an impulse and trained the model to identify noxious underwater air bubbles.
An impulse is a custom machine learning pipeline in Edge Impulse. I created my impulse by employing the Raw Data processing block and the Classification learning block.
The Raw Data processing block generates windows from data samples without applying any specific signal processing procedures.
The Classification learning block represents a Keras neural network model. This learning block lets the user change the model classifier, settings, architecture, and layers.
According to my experiments with my neural network model with the Ridge classifier, I modified the classification settings and the hyperparameter alpha to build a neural network model with high accuracy and validity:
📌 Neural network settings:
Alpha ➡ 0.4
Validation set size ➡ 5
After generating features and training my model with training samples, Edge Impulse evaluated the precision score (accuracy) as 100%.
The precision score (accuracy) is approximately 100% due to the modest volume of validation samples of ultrasonic scan buffers demonstrating toxic underwater air bubbles. As compared to other supported classifiers, the Ridge classifier produced the most accurate detections after adjusting the regularization strength according to my data set. Since I configured my neural network model to conform to my aquarium's conditions, I highly recommend retraining the model with aquatic ultrasonic scan samples from the targeted fish farm before running inferences to identify underwater air bubbles.
After building and training my neural network model with the Ridge classifier, I tested its accuracy and validity by utilizing testing samples.
The evaluated accuracy of the model is 100%.
After validating my neural network model, I deployed it as a fully optimized and customizable Arduino library.
When I completed capturing images of chemical water quality test results (color-coded) representing the most common indicators of water contamination in a retail fish farm and storing the captured samples on UNIHIKER, I started to work on my object detection (RetinaNet) model to assess water pollution levels.
Since Edge Impulse provides developer-friendly tools for advanced edge AI applications and supports almost every development board due to its model deployment options, I decided to utilize Edge Impulse Enterprise to build my object detection model. Also, Edge Impulse Enterprise incorporates elaborate model architectures for advanced computer vision applications and optimizes the state-of-the-art vision models for edge devices such as UNIHIKER.
Since assessing water pollution levels based on the applied chemical water quality tests (color-coded) is a complex computer vision task, I decided to employ an enhanced vision model architecture. After conducting experiments with the advanced algorithms supported by Edge Impulse Enterprise, I decided to utilize RetinaNet from the NVIDIA TAO Toolkit.
NVIDIA TAO Toolkit is a low-code AI toolkit built on TensorFlow and PyTorch, which simplifies the model training process and lets developers select one of 100+ pre-trained vision AI models with customization options. TAO provides an extensive selection of pre-trained models, either trained on public datasets or proprietary datasets for task-specific use cases. Since Edge Impulse Enterprise incorporates production-tested NVIDIA TAO vision models and provides configurable backbones (MobileNetV2, GoogLeNet, ResNet, etc.), fine-tuning RetinaNet to unique data sets and deploying optimized models for edge devices are efficient and user-friendly on Edge Impulse.
Even though Edge Impulse supports JPG or PNG files to upload as samples directly, each target object in a training or testing sample needs to be labeled manually. Therefore, I needed to follow the steps below to format my data set so as to train my object detection model accurately:
Data Scaling (Resizing)
Data Labeling
As explained earlier, I managed to group water pollution levels into three categories empirically while observing the water quality levels after applying chemical color-coded tests.
Since I added the mentioned pollution categories and the collection date to the file names while capturing images of water quality test results (color-coded), I preprocessed my data set effortlessly to label each target object on an image sample on Edge Impulse by utilizing the assigned pollution category:
sterile
dangerous
polluted
Conveniently, Edge Impulse Enterprise allows building advanced computer vision models optimized in size and accuracy and deploying the trained model as supported firmware (Linux AARCH64) for UNIHIKER. Therefore, after scaling (resizing) and processing my image data set to label target objects, I was able to build a valid object detection model to assess water pollution based on the applied water quality tests, which runs on UNIHIKER without any additional requirements.
You can inspect my object detection (RetinaNet) model on Edge Impulse as a public project.
After collecting training and testing image samples, I uploaded them to my project on Edge Impulse. Then, I labeled each target object on the image samples.
After uploading my image data set successfully, I labeled each target object on the image samples by utilizing the assigned water pollution categories (classes). In Edge Impulse, labeling an object is as easy as dragging a box around it and entering a class. Also, Edge Impulse runs a tracking algorithm in the background while labeling objects, so it moves the bounding boxes automatically for the same target objects in subsequent images.
After labeling target objects on my training and testing samples successfully, I designed an impulse and trained the model to detect water pollution levels based on the applied chemical water quality tests.
An impulse is a custom machine learning pipeline in Edge Impulse. I created my impulse by employing the Image preprocessing block and the Object Detection (Images) learning block.
The Image preprocessing block optionally converts the input image to grayscale or RGB and generates a features array from the raw image.
The Object Detection (Images) learning block represents a machine learning algorithm that detects objects in the given image and distinguishes between the model labels.
In this case, I configured the input image format as RGB since the applied chemical water quality tests highly rely on color codes to distinguish quality levels.
Due to the NVIDIA TAO vision model requirements, the image width and height must be multiples of 32 while configuring the impulse.
To change the default computer vision model (algorithm), click the Choose a different model button and select the NVIDIA TAO RetinaNet model, providing superior performance on smaller objects.
Then, switch to GPU training since NVIDIA TAO models are GPU-optimized computer vision algorithms.
According to my rigorous experiments with my RetinaNet object detection model, I modified the model and augmentation settings to fine-tune the MobileNet v2 backbone so as to build an optimized object detection model with high accuracy and validity:
📌 Object Detection (Images) settings:
Backbone ➡ MobileNet v2 (3x224x224, 800 K params)
Number of training cycles ➡ 200
Minimum learning rate ➡ 0.012
Maximum learning rate ➡ 0.015
Random crop min scale ➡ 1.0
Random crop max scale ➡ 1.0
Random crop min aspect ratio ➡ 0.1
Random crop max aspect ratio ➡ 0.1
Zoom out min scale ➡ 1.0
Zoom out max scale ➡ 1.0
Validation set size ➡ 5
IoU threshold ➡ 0.95
Confidence threshold ➡ 0.001
Batch size ➡ 16
📌 Neural network architecture:
NVIDIA TAO RetinaNet (ENTERPRISE)
After generating features and training my RetinaNet model with training samples, Edge Impulse evaluated the precision score (accuracy) as 65.2%.
The precision score (accuracy) is approximately 66% due to the small volume of validation image samples of color-coded chemical water quality test results. Since the validation set only contains two of the three water pollution categories, the model can only validate those two classes while training. Therefore, I highly recommend retraining the model with image samples of the water quality tests applied to the targeted retail fish farm before running inferences.
After building and training my RetinaNet object detection model, I tested its accuracy and validity by utilizing testing image samples.
The evaluated accuracy of the model is 88.89%.
After validating my object detection model, I deployed it as a fully optimized and customizable Linux (AARCH64) application (.eim).
After building, training, and deploying my neural network model with the Ridge classifier as an Arduino library on Edge Impulse, I needed to import the generated Arduino library and upload the sketch to Nano ESP32 to run the optimized model directly so as to identify toxic underwater air bubbles with minimal latency, memory usage, and power consumption.
Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single package while deploying models as Arduino libraries, even for complex machine learning algorithms, I was able to import my advanced model effortlessly to run inferences.
After importing my model into the Arduino IDE successfully, I programmed Nano ESP32 to run inferences to identify noxious underwater air bubbles via aquatic ultrasonic scans.
Then, I employed Nano ESP32 to transfer the model detection results (buffer passed to the model and the detected air bubble class) to the web application via an HTTP POST request after running an inference successfully.
As mentioned earlier, the web application can also communicate with UNIHIKER to allow the user to access the stored model detection results in order to provide interconnected features.
Since the interconnected features for data collection and running advanced AI models are performed by two separate development boards (Nano ESP32 and UNIHIKER), the described code snippets show the different aspects of the same code file. Please refer to the code files below to inspect all interconnected functions in detail.
📁 AIoT_Aquatic_Ultrasonic_Imaging.ino
⭐ Define the required parameters to run an inference with the Edge Impulse neural network model with the Ridge classifier.
⭐ Define the threshold value (0.60) for the model outputs (predictions).
⭐ Define the air bubble class names.
⭐ In the run_inference_to_make_predictions function:
⭐ Summarize the Edge Impulse neural network model inference settings and print them on the serial monitor.
⭐ If the URM15 ultrasonic sensor produces an ultrasonic scan data buffer (20 x 20 image — 400 points) successfully:
⭐ Create a signal object from the resized (scaled) raw data buffer — ultrasonic scan buffer.
⭐ Run an inference with the Ridge classifier.
⭐ Print the inference timings on the serial monitor.
⭐ Obtain the prediction results for each label (class).
⭐ Print the model classification results on the serial monitor.
⭐ Get the most confident predicted label (class).
⭐ Print inference anomalies on the serial monitor, if any.
⭐ Release the previously generated ultrasonic scan buffer if requested.
⭐ In the show_interface function:
⭐ Create the home screen and menu option layouts with the assigned interface icons so as to elevate the user experience with an enhanced user interface.
⭐ If the fourth menu option (Run Inference) is activated:
⭐ Display the model inference options on the SSD1306 screen.
⭐ If the control button A is pressed, run an inference with the Edge Impulse neural network model with the Ridge classifier.
⭐ If the neural network model detects an air bubble class successfully, notify the user by showing the associated class icon on the SSD1306 screen.
⭐ After showing the detected class, if the control button C is pressed, transfer the model detection results (ultrasonic scan buffer passed to the model and the detected label) to the web application via an HTTP POST request.
⭐ According to the data transmission success, notify the user by showing the associated connection icon on the screen.
⭐ If the control button D is pressed, redirect the user to the home screen.
My Edge Impulse neural network model with the Ridge classifier predicts possibilities of labels (air bubble classes) for the passed ultrasonic scan data buffer as an array of 2 numbers. They represent the model's "confidence" that the given features buffer corresponds to each of the two different air bubble classes [0 - 1], as shown in Step 10:
0 — bubble
1 — normal
You can inspect the overlapping user interface features, such as generating an ultrasonic scan buffer, in the previous steps.
After setting up and running the optimized neural network model on Nano ESP32:
🐠📡💧📊 As explained in the previous steps, after initiating the ultrasonic image scanning procedure, the device allows the user to generate an ultrasonic scan data buffer — 20 x 20 (400 points).
🐠📡💧📊 If the user activates the fourth menu option — (Run Inference):
🐠📡💧📊 The device turns the RGB LED to white and displays the selectable inference options with their associated buttons.
A) Run Inference
C) Send: Pending
🐠📡💧📊 If the control button A is pressed, the device runs an inference with the neural network model to identify noxious underwater air bubbles by utilizing the produced aquatic ultrasonic scan buffer.
🐠📡💧📊 If the neural network model detects an air bubble class successfully, the device notifies the user by showing the associated class icon on the SSD1306 screen.
🐠📡💧📊 After displaying the detected class, if the control button C is pressed, the device transfers the model detection results (ultrasonic scan buffer passed to the model and the detected label) to the web application via an HTTP POST request.
🐠📡💧📊 If Nano ESP32 transfers the given data packet successfully to the web application, the device notifies the user by showing the assigned connection icon on the screen and turning the RGB LED to green.
🐠📡💧📊 Also, Nano ESP32 prints progression notifications on the serial monitor for debugging.
🐠📡💧📊 After receiving the ultrasonic scan data buffer passed to the model, the web application saves the received buffer as a text (TXT) file to the detection folder by adding the detected label and the prediction date to the file name.
detection_normal__2024_04_03_10_15_35.txt
detection_bubble__2024_04_03_10_20_52.txt
After building, training, and deploying my RetinaNet object detection model as a Linux (AARCH64) application on Edge Impulse, I needed to upload the generated Linux application to UNIHIKER to run the optimized model directly via the Linux Python SDK so as to create an accessible AI-powered water pollution detection device operating with minimal latency, memory usage, and power consumption.
Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single EIM file while deploying models as a Linux (AARCH64) application, even for complex computer vision models from NVIDIA TAO, I was able to import my advanced model effortlessly to run inferences in Python.
sudo chmod 777 /root/aquarium/model/ai-based-aquatic-chemical-water-quality-testing-linux-aarch64.eim
/assets
/detections
/model
/samples
/scans
main.py
_class.py
After uploading the generated Linux application successfully, I programmed UNIHIKER to run inferences via the user interface (GUI) to assess water pollution levels based on the applied chemical water quality tests.
Then, I employed UNIHIKER to transfer the resulting image modified with the produced bounding boxes to a given Telegram bot via the HTTP-based Telegram Bot API.
As mentioned earlier, Nano ESP32 cannot convert the generated ultrasonic scan buffers to ultrasonic images after running the neural network model. Therefore, I employed UNIHIKER to communicate with the web application in order to obtain the latest model detection result (ultrasonic scan buffer passed to the neural network model and the detected air bubble class) and convert the received buffer to an ultrasonic image via the built-in OpenCV functions.
Also, similar to the modified resulting image, UNIHIKER can transfer the produced ultrasonic image to the given Telegram bot so as to inform the user of the latest aquatic ultrasonic scan and the presence of toxic underwater air bubbles.
Since the interconnected features for data collection and running advanced AI models are performed by two separate development boards (UNIHIKER and Nano ESP32), the described code snippets show the different aspects of the same code file. Please refer to the code files below to inspect all interconnected functions in detail.
📁 _class.py
Please refer to the _class.py file to inspect all interconnected functions.
⭐ Include the required modules.
⭐ In the init function:
⭐ Initialize the USB high-quality camera feed.
⭐ Define the required variables to establish the connection with the web application — Aquatic_Ultrasonic_Imaging.
⭐ Define the required frame settings.
⭐ Define the required configurations to run the Edge Impulse RetinaNet (NVIDIA TAO) object detection model.
⭐ Determine the required parameters to produce an ultrasonic image (20 x 20) from the received ultrasonic scan buffer.
⭐ Define the required parameters to transfer information to the given Telegram bot — @aquatic_ultrasonic_bot — via the HTTP-based Telegram Bot API.
⭐ Initiate the user interface (Tkinter-based GUI) and the GPIO interface of the microcontroller coprocessor via the integrated Python modules.
⭐ In the run_inference function:
⭐ Summarize the Edge Impulse RetinaNet model inference settings and print them on the shell.
⭐ Get the currently captured and modified image frame via the high-quality USB webcam.
⭐ After obtaining the modified frame, resize it (if necessary) and generate features from the obtained frame depending on the provided model settings.
⭐ Run an inference.
⭐ Obtain labels (classes) and bounding box measurements for each detected target object on the passed frame.
⭐ If the Edge Impulse model predicts a class successfully, get the predicted label (class).
⭐ Modify the resulting image with the produced bounding boxes (if any) and save the modified resulting image with the prediction date to the detections folder.
⭐ Then, notify the user of the model detection results on the interactive user interface.
⭐ Also, if configured, transfer the modified resulting image and the detected water pollution level (class) to the given Telegram bot as a push notification.
⭐ Finally, stop the running inference.
⭐ In the make_a_get_request function:
⭐ Depending on the passed command, make an HTTP GET request to the web application in order to perform these tasks:
⭐ Make the web application generate a CSV file from the stored ultrasonic scan buffer samples (text files).
⭐ Obtain the latest neural network model detection result (ultrasonic scan buffer passed to the neural network model and the detected air bubble class) and convert the retrieved buffer (400 points) to an ultrasonic image (20 x 20).
⭐ Then, display the produced ultrasonic image with the detected air bubble class (label) for further inspection.
⭐ In the generate_ultrasonic_image function:
⭐ Obtain the template image — black square.
⭐ Split the received ultrasonic scan data buffer to obtain each data point individually.
⭐ For each data point, draw depth indicators, color-coded according to the given depth ranges, on the template image via the built-in OpenCV functions.
⭐ After drawing all of the color-coded indicators (20 x 20) on the template, save the modified image as the latest ultrasonic image to the scans folder — latest_ultrasonic_image.jpg.
⭐ In the telegram_send_data function:
⭐ Get the directory path of the root folder of this application (aquarium) on UNIHIKER.
⭐ Depending on the passed command (ultrasonic or water_test):
⭐ Make an HTTP POST request to the HTTP-based Telegram Bot API so as to transfer the produced ultrasonic image and the detected air bubble class to the given Telegram bot.
⭐ Make an HTTP POST request to the HTTP-based Telegram Bot API so as to transfer the resulting image modified with the produced bounding boxes and the detected water pollution level to the given Telegram bot.
⭐ After sending an image from the local storage successfully, notify the user via the interactive user interface.
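To make the run_inference flow above more concrete, here is a minimal sketch built on the Edge Impulse Linux Python SDK. The model path matches the EIM file name used earlier, but the detections folder path, confidence threshold, camera index, and helper name are illustrative assumptions rather than the exact production code.

```python
# A condensed sketch of the run_inference flow (not the exact production code).
import datetime

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

# Assumed paths based on the folder tree shown earlier.
MODEL_PATH = "model/ai-based-aquatic-chemical-water-quality-testing-linux-aarch64.eim"
DETECTIONS_DIR = "detections"


def run_inference_once(camera_index=0, confidence_threshold=0.60):
    # Grab the latest frame from the high-quality USB webcam.
    camera = cv2.VideoCapture(camera_index)
    success, frame = camera.read()
    camera.release()
    if not success:
        raise RuntimeError("Could not capture a frame from the USB webcam.")

    with ImageImpulseRunner(MODEL_PATH) as runner:
        model_info = runner.init()
        print("Loaded model:", model_info["project"]["name"])

        # The SDK expects RGB images, while OpenCV captures frames in BGR.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(rgb)

        result = runner.classify(features)
        detected_class = "empty"
        for box in result["result"].get("bounding_boxes", []):
            if box["value"] < confidence_threshold:
                continue
            detected_class = box["label"]
            # Draw the produced bounding box on the resulting image.
            x, y, w, h = box["x"], box["y"], box["width"], box["height"]
            cv2.rectangle(cropped, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Save the modified resulting image with the prediction date.
        date = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
        output = cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR)
        cv2.imwrite(f"{DETECTIONS_DIR}/detection_{detected_class}_{date}.jpg", output)
        return detected_class


if __name__ == "__main__":
    print("Detected water pollution level:", run_inference_once())
```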
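Similarly, the image-generation and Telegram-transfer steps boil down to the pattern below: a 20 x 20 grid of color-coded rectangles drawn with the built-in OpenCV functions, followed by a multipart sendPhoto request. The template size, output path, bot token, and chat id placeholders are assumptions for illustration.

```python
# A condensed sketch of generate_ultrasonic_image and telegram_send_data.
# The template size, bot token, and chat id are placeholders (assumptions).
import cv2
import numpy as np
import requests

TEMPLATE_SIZE = 400   # black square template (pixels)
GRID = 20             # 20 x 20 depth indicators from 400 data points
CELL = TEMPLATE_SIZE // GRID


def depth_to_bgr(p):
    # Color-code a data point (cm) according to the predefined depth ranges (BGR order).
    if 15 <= p < 20:
        return (255, 255, 255)
    if 20 <= p < 25:
        return (255, 255, 0)
    if 25 <= p < 30:
        return (255, 0, 0)
    if 30 <= p < 35:
        return (0, 255, 255)
    if p >= 35:
        return (0, 255, 0)
    return (0, 0, 0)  # out-of-range points stay black


def generate_ultrasonic_image(scan_buffer, output_path="scans/latest_ultrasonic_image.jpg"):
    # scan_buffer: comma-separated string of 400 ultrasonic distance measurements.
    points = [float(p) for p in scan_buffer.split(",") if p.strip()]
    template = np.zeros((TEMPLATE_SIZE, TEMPLATE_SIZE, 3), dtype=np.uint8)
    for i, p in enumerate(points[:GRID * GRID]):
        row, col = divmod(i, GRID)
        x, y = col * CELL, row * CELL
        cv2.rectangle(template, (x, y), (x + CELL, y + CELL), depth_to_bgr(p), -1)
    cv2.imwrite(output_path, template)
    return output_path


def telegram_send_photo(image_path, caption, bot_token="<bot_token>", chat_id="<chat_id>"):
    # sendPhoto accepts local files uploaded as multipart/form-data.
    url = f"https://api.telegram.org/bot{bot_token}/sendPhoto"
    with open(image_path, "rb") as photo:
        response = requests.post(url, data={"chat_id": chat_id, "caption": caption},
                                 files={"photo": photo})
    return response.ok


if __name__ == "__main__":
    path = generate_ultrasonic_image(",".join(["25.0"] * 400))
    telegram_send_photo(path, "Latest aquatic ultrasonic scan | detected class: bubble")
```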
📁 main.py
I employed the main.py file to initialize the user interface (GUI), the GPIO interface of the microcontroller coprocessor, and the camera feed simultaneously, as outlined in the sketch after the list below.
⭐ Define the aquarium object of the aquarium_func class.
⭐ Define and initialize separate Python threads to start the camera feed and the GPIO interface.
⭐ Finally, enable the interactive user interface (GUI) designed with the built-in UNIHIKER modules.
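For reference, the overall structure of main.py can be outlined as follows; the method names (camera_feed, board_configuration, create_user_interface) are descriptive placeholders derived from the steps above, not necessarily the exact identifiers in the code file.

```python
# An outline of main.py; method names are descriptive placeholders.
from threading import Thread

from _class import aquarium_func

# Define the aquarium object of the aquarium_func class.
aquarium = aquarium_func()

# Start the camera feed and the GPIO (microcontroller coprocessor) interface on separate threads.
Thread(target=aquarium.camera_feed, daemon=True).start()
Thread(target=aquarium.board_configuration, daemon=True).start()

# Enable the interactive user interface (GUI) on the main thread.
aquarium.create_user_interface()
```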
My Edge Impulse object detection (NVIDIA TAO RetinaNet) model scans a captured image frame and estimates the probability of each trained label in order to recognize target objects in the given picture. The prediction result (score) represents the model's "confidence" that a detected target object corresponds to one of the three labels (classes) [0 - 2], as shown in Step 11:
0 — dangerous
1 — polluted
2 — sterile
After setting up and running the optimized Edge Impulse object detection (RetinaNet) model on UNIHIKER:
🐠📡💧📊 As mentioned earlier, in the Water Quality Test section, the device lets the user generate a snapshot image to inspect the latest stored camera frame.
🐠📡💧📊 The device then lets the user generate and inspect multiple snapshot images before deciding which resized camera frame to pass to the object detection model.
🐠📡💧📊 When the user clicks the Run Inference button, the device runs an inference with the object detection model to detect the water pollution level based on the applied chemical water quality tests.
🐠📡💧📊 After detecting a water pollution level (class) successfully, the device modifies the resulting image with the produced bounding boxes and saves the modified resulting image with the prediction date to the detections folder.
🐠📡💧📊 Then, if configured, the device transfers the latest saved resulting image and the detected class to the given Telegram bot by making an HTTP POST request to the HTTP-based Telegram Bot API.
🐠📡💧📊 After sending the push notification to the Telegram bot successfully, the device notifies the user via the onboard touchscreen.
🐠📡💧📊 Also, UNIHIKER prints progression notifications on the shell for debugging.
🐠📡💧📊 As mentioned earlier, the device employs the secondary RGB LED to inform the user of the device status while performing operations related to UNIHIKER. Since I initially planned to place UNIHIKER on the back of the Squid PCB, I configured the micro:bit-compatible edge connector (Kitronik) pin connections in reverse. Due to my aquarium's shape, I later decided to position UNIHIKER at the front. Thus, solder the edge connector backward or flip UNIHIKER to enable the secondary RGB LED.
After applying four color-coded water quality tests and conducting diverse experiments, I obtained accurate and consistent prediction results for each water pollution level (class).
As mentioned earlier, Nano ESP32 cannot convert the produced ultrasonic scan buffers to ultrasonic images after running the neural network model. Thus, I added features to the UNIHIKER user interface (GUI) that enable UNIHIKER to access the neural network model results via the web application.
🐠📡💧📊 If the user clicks the Aquatic Ultrasonic Scan button, the device opens the Aquatic Ultrasonic Scan section.
🐠📡💧📊 If the user clicks the Generate CSV button, the device makes an HTTP GET request to the web application, forcing the application to generate a pre-formatted CSV file (scan_data_items.csv) from all of the stored ultrasonic scan buffer samples (text files).
🐠📡💧📊 If the user clicks the Generate Image button:
🐠📡💧📊 The device makes an HTTP GET request to the web application so as to obtain the latest neural network model detection results, including the ultrasonic scan buffer passed to the neural network model and the detected air bubble class (label).
🐠📡💧📊 Then, the device splits the retrieved ultrasonic scan data buffer to obtain each data point individually.
🐠📡💧📊 The device draws depth indicators (20 x 20) on the passed template image (black square) via the built-in OpenCV functions.
🐠📡💧📊 While generating the aquatic ultrasonic image (20 x 20) from 400 data points, the device assigns colors to the depth indicators according to the predefined depth ranges so as to visualize the given aquatic ultrasonic scan as a consistently color-encoded image.
🐠📡💧📊 Since OpenCV represents colors in BGR order by default, the color tuples below are given in BGR format.
15 <= p < 20 ➡ (255,255,255)
20 <= p < 25 ➡ (255,255,0)
25 <= p < 30 ➡ (255,0,0)
30 <= p < 35 ➡ (0,255,255)
p >= 35 ➡ (0,255,0)
🐠📡💧📊 After producing the aquatic ultrasonic image, the device saves the generated image to the scans folder — latest_ultrasonic_image.jpg.
🐠📡💧📊 Then, the device shows the latest aquatic ultrasonic image with the detected air bubble class (label) on the user interface (GUI) for further inspection.
🐠📡💧📊 If the user clicks the displayed aquatic ultrasonic image on the onboard touchscreen, the device transfers the aquatic ultrasonic image and the detected air bubble class to the given Telegram bot by making an HTTP POST request to the HTTP-based Telegram Bot API.
🐠📡💧📊 After sending the push notification to the Telegram bot successfully, the device notifies the user via the onboard touchscreen.
After conducting numerous experiments, UNIHIKER consistently produced precise aquatic ultrasonic images, visualizing aquatic ultrasonic scans that manifest noxious underwater air bubbles and informing the user via Telegram push notifications.
Aquarium Progression (Time-lapse) | AI-based Aquatic Ultrasonic Imaging & Chemical Water Testing
Toxic Underwater Air Bubbles | AI-based Aquatic Ultrasonic Imaging & Chemical Water Testing
Water Pollution Assessment | AI-based Aquatic Ultrasonic Imaging & Chemical Water Testing
By applying advanced AI-powered multi-algorithm detection methods to identify toxic underwater air bubbles and assess water pollution based on chemical water quality tests, we can:
🐠📡💧📊 employ ultrasonic imaging as a nondestructive inspection method to identify air gaps and then assess water pollution, revealing any underlying conditions caused by accumulating harmful underwater waste,
🐠📡💧📊 prevent contaminants from impinging on aquatic life,
🐠📡💧📊 avert algal blooms, hypoxia (dead zones), and expanding barren lands,
🐠📡💧📊 detect the surge of toxic air bubbles to preclude potential environmental hazards,
🐠📡💧📊 assist commercial aquaculture facilities in protecting aquatic life acclimatized to the enclosed water bodies,
🐠📡💧📊 help retail fish farms increase their profits and stock survival rates.
Huge thanks to ELECROW for sponsoring this project with their high-quality PCB manufacturing service.
Huge thanks to DFRobot for sponsoring these products:
Also, huge thanks to Anycubic for sponsoring a brand-new Anycubic Kobra 2 Max.
Although the URM15 is an exceptional ultrasonic ranging sensor providing an IP65 waterproof probe with a measuring range of 30 cm - 500 cm, it does not support direct data transmission and requires the standard Modbus-RTU protocol for stable communication. Thus, I utilized an RS485-to-UART signal adapter module (active-isolated) to obtain the generated ultrasonic distance measurements from the sensor and transfer them to Nano ESP32 via serial communication. Since Nano ESP32 cannot supply the stable 12V required by the URM15, I connected a USB buck-boost converter board to an external battery to provide the required 12V to the ultrasonic sensor through the signal adapter module.
Since the URM15 ultrasonic sensor supports external temperature compensation to counteract the effect of fluctuating ambient temperature, I utilized a DS18B20 waterproof temperature sensor to tune the ultrasonic sensor. As shown in the schematic below, before connecting the DS18B20 waterproof temperature sensor to Nano ESP32, I attached a 4.7K pull-up resistor between the DATA line and the VCC line of the sensor to obtain accurate temperature measurements.
To detect the movement of the ultrasonic sensor probe underwater while collecting data, I utilized a 6-axis accelerometer supporting UART communication. Since I employed Nano ESP32 to pass the collected data buffers directly to the web application, I did not need to connect an external storage module such as a microSD card module.
To provide the user with a feature-rich interface, I connected an SSD1306 OLED display and four control buttons to Nano ESP32. I also added an RGB LED to inform the user of the device status while performing operations related to Nano ESP32.
Since UNIHIKER (RK3308 Arm 64-bit) is an outstandingly compact single-board computer providing a USB Type-A connector for peripherals, I was able to connect a high-quality USB webcam (PK-910H) to capture and save image samples effortlessly.
As explained earlier, UNIHIKER comes with a micro:bit-compatible connector to access the GPIO interface of the microcontroller coprocessor (RISC-V). I utilized the Kitronik edge connector to access the GPIO pins and adjust the secondary RGB LED to inform the user of the device status while performing operations related to UNIHIKER. In this regard, I was able to embed UNIHIKER into the Squid PCB as the centerpiece to build a single-unit device.
Before embedding UNIHIKER, I tested the micro:bit-compatible GPIO interface by utilizing a soldered Kitronik breakout board with the edge connector.
After completing soldering and adjustments, I attached all remaining components to the Squid PCB via the female headers.
First of all, open BotFather on Telegram and enter /start to view the available command list and instructions.
Enter the /newbot command to create a new bot. Register the Telegram bot name when BotFather requests a name. It will be displayed in contact details and elsewhere.
Then, register the bot username — tag. Usernames are 5-32 characters long and case insensitive but may only include Latin characters, numbers, and underscores. They must end in 'bot', e.g. 'tetris_bot' or 'TetrisBot'.
After completing the steps above, BotFather generates an authorization token for the new Telegram bot. The authorization token is a string, such as 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11, that is required to authorize the bot and send requests to the HTTP-based Telegram Bot API. Keep the generated token secure and store it safely.
To change the profile picture of the Telegram bot, enter the /setuserpic command and upload a picture.
Finally, to add a description to the Telegram bot to be displayed whenever the user initiates a new chat, enter the /setdescription command and register the text description.
Make an HTTP GET request by utilizing the secured Telegram bot authorization token:
Then, initiate a new chat and send a message to the given Telegram bot. After refreshing the page, it should display the Update object list, including the chat id:
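If you prefer to grab the chat id programmatically instead of reading the raw JSON in the browser, a short Python snippet against the standard getUpdates method works as well; the token below is a placeholder.

```python
# Fetch the chat id of the latest message sent to the bot via getUpdates.
import requests

BOT_TOKEN = "<bot_token>"  # placeholder: the token generated by BotFather

updates = requests.get(f"https://api.telegram.org/bot{BOT_TOKEN}/getUpdates").json()
if updates.get("result"):
    chat_id = updates["result"][-1]["message"]["chat"]["id"]
    print("Chat ID:", chat_id)
else:
    print("No updates yet. Send a message to the bot and try again.")
```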
Install and set up the XAMPP development environment.
To install the required core, navigate to Tools ➡ Board ➡ Boards Manager and search for Arduino ESP32 Boards.
After installing the core, navigate to Tools ➡ Board ➡ ESP32 Arduino (Arduino) and select Arduino Nano ESP32.
Download and inspect the required libraries for the URM15 - 75KHZ ultrasonic sensor, the 6-axis accelerometer, the DS18B20 waterproof temperature sensor, and the SSD1306 OLED display:
To be able to display images (icons) on the SSD1306 OLED screen, first convert image files (PNG or JPG) to monochromatic bitmaps. Then, convert the generated bitmaps to compatible C data arrays. I decided to utilize LCD Assistant to create C data arrays.
After installing LCD Assistant, upload a monochromatic bitmap and select Vertical or Horizontal, depending on the screen type.
Then, save all the converted C data arrays to the logo.h file.
First of all, if you are a novice in programming with UNIHIKER, please visit the official tutorials and guidelines.
After connecting UNIHIKER to the computer via a USB Type-C cable, go to the home page of UNIHIKER's local web server via the default browser: 10.1.2.3.
Then, navigate to Network Settings and establish the Wi-Fi connection.
Since installing Python modules requires the terminal, and UNIHIKER does not allow the user to access the terminal via its onboard interface, I needed to connect to UNIHIKER remotely via SSH.
To set up the SSH connection to access the terminal, I decided to utilize MobaXterm due to its advanced terminal configuration options.
After installing MobaXterm, connect to the UNIHIKER remote host with the default root user credentials:
After establishing the SSH connection via MobaXterm, to run Edge Impulse object detection models on UNIHIKER, install the Edge Impulse Linux Python SDK by utilizing the terminal.
To be able to utilize the Linux Python SDK, the Cython module is required on UNIHIKER. However, the latest Cython version is not compatible with the SDK. According to my experiments, the Cython 0.29.36 version works without a problem.
After downloading the correct Cython version, continue installing the Linux Python SDK.
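For reference, the installation boils down to commands along these lines in the MobaXterm terminal, assuming pip3 is available on UNIHIKER:
pip3 install cython==0.29.36
pip3 install edge_impulse_linux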
Since I employed the integrated Python modules to control the GPIO pins of the microcontroller coprocessor, design a feature-rich user interface (GUI — Tkinter application), and display the interactive user interface on the built-in touchscreen, I did not need to install any additional Python libraries via MobaXterm.
Although MobaXterm lets the user access the root folder and run Python scripts, I decided to utilize Thonny Python IDE to program my Python scripts due to its simple debugger.
After installing the required modules via MobaXterm, open Thonny and connect UNIHIKER by applying the built-in Remote Python 3 (SSH) interpreter.
After changing the interpreter, use the default root user credentials to initiate the SSH connection on Thonny.
After establishing the SSH connection, Thonny lets the user access the root folder, create directories, upload files (assets), and run Python scripts.
Although Thonny does not let the user install or update Python modules, to inspect the available (pre-installed) libraries, go to Tools ➡ Manage packages...
To run code files manually without establishing the SSH connection, press the onboard Home button on UNIHIKER, go to Run Programs, and select a code file.
As explained earlier, I placed a lot of marimo moss balls at the bottom of the tank to increase the bottom surface area, provide underwater substrate gaps, and observe abundant noxious air bubbles while collecting ultrasonic scan data.
Thus, I managed to construct a valid data set for the neural network model.
Since UNIHIKER provides a built-in Python module tailored for displaying a Tkinter-based GUI on its onboard touchscreen (240 x 320), I was able to program the interactive user interface effortlessly.
Although the built-in module supports limited Tkinter features, I managed to create a multi-window user interface by shifting groups of GUI elements on and off-screen.
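The gist of this off-screen shifting technique can be illustrated with plain Tkinter; the snippet below is a simplified stand-in, since the actual interface relies on UNIHIKER's built-in Python GUI module and its 240 x 320 touchscreen.

```python
# Simplified illustration of multi-window navigation by shifting element groups off-screen.
import tkinter as tk

SCREEN_W, SCREEN_H = 240, 320  # UNIHIKER touchscreen resolution

root = tk.Tk()
root.geometry(f"{SCREEN_W}x{SCREEN_H}")

# Each "window" (layer) is a frame the size of the screen.
home = tk.Frame(root, bg="navy", width=SCREEN_W, height=SCREEN_H)
scan = tk.Frame(root, bg="teal", width=SCREEN_W, height=SCREEN_H)

def show(active):
    # Move the selected frame on-screen and push the others off-screen.
    for frame in (home, scan):
        frame.place(x=0 if frame is active else SCREEN_W, y=0)

tk.Button(home, text="Aquatic Ultrasonic Scan", command=lambda: show(scan)).place(x=40, y=140)
tk.Button(scan, text="Back to Home", command=lambda: show(home)).place(x=70, y=280)

show(home)
root.mainloop()
```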
The interactive user interface (GUI) consists of three separate windows (layers):
First of all, to utilize the incorporated tools for advanced AI applications, sign up for Edge Impulse Enterprise.
Then, create a new project under your organization.
Open the Data acquisition page and go to the CSV Wizard section.
Upload a CSV file as an example to set the configuration settings (rules) for processing files via CSV Wizard.
Define the data structure (time-series data or not) of the records in the passed CSV file.
Select the column (data field) containing labels for the given data records.
Then, determine the columns containing values to split a data record into data items and click Finish wizard.
After setting the CSV rules, navigate to the Data acquisition page and click the Upload data icon.
Choose the data category (training or testing) and select a CSV file.
Then, click the Upload data button to upload samples labeled automatically with the values in the specified column (data field).
After navigating to the Create impulse page, select the Raw Data processing block and the Classification learning block. Then, click Save Impulse.
Before generating features for the neural network model, go to the Raw data page and click Save parameters.
After saving parameters, click Generate features to apply the Raw Data processing block to training samples.
Then, navigate to the Classifier page.
To change the default model classifier, click the Add an extra layer button and select the scikit-learn Ridge classifier employing L2 regularization.
After configuring the model classifier, click Start training.
To validate the trained model, go to the Model testing page and click Classify all.
To deploy the validated model as an Arduino library, navigate to the Deployment page and search for Arduino library.
Then, choose the default Unoptimized (float32) option since the Quantized (int8) optimization option is not available for the Ridge classifier.
Finally, click Build to download the model as an Arduino library.
First of all, to utilize the incorporated tools for advanced AI applications, sign up for Edge Impulse Enterprise.
Then, create a new project under your organization.
To be able to label image samples manually on Edge Impulse for object detection models, go to Dashboard ➡ Project info ➡ Labeling method and select Bounding boxes (object detection).
Navigate to the Data acquisition page and click the Upload data icon.
Then, choose the data category (training or testing), select image files, and click the Upload data button.
Go to Data acquisition ➡ Labeling queue. It shows all unlabeled items (training and testing) remaining in the given data set.
Finally, select an unlabeled item, drag bounding boxes around target objects, click the Save labels button, and repeat this process until all samples have at least one labeled target object.
Go to the Create impulse page and set image width and height parameters to 320. Then, select the resize mode parameter as Fit shortest axis so as to scale (resize) given training and testing image samples.
Select the Image preprocessing block and the Object Detection (Images) learning block. Finally, click Save Impulse.
Before generating features for the object detection model, go to the Image page and set the Color depth parameter as RGB. Then, click Save parameters.
After saving parameters, click Generate features to apply the Image preprocessing block to training image samples.
After generating features successfully, navigate to the Object detection page.
After configuring the model settings, click Start training.
To validate the trained model, go to the Model testing page and click Classify all.
To deploy the validated model as a Linux (AARCH64) application, navigate to the Deployment page and search for Linux (AARCH64).
Then, choose the Quantized (int8) optimization option to get the best performance possible while running the deployed model.
Finally, click Build to download the model as a Linux (AARCH64) application (.eim).
After downloading the model as an Arduino library in the ZIP file format, go to Sketch ➡ Include Library ➡ Add .ZIP Library...
Then, include the Aquatic_Air_Bubble_Detection_inferencing.h file to import the Edge Impulse neural network model with the Ridge classifier.
After downloading the generated Linux (AARCH64) application to the model folder and installing the required modules via SSH, make sure to change the file permissions via the terminal on MobaXterm to be able to execute the model file.
After switching the SSH connection to the Thonny IDE for programming in Python, create the required folder tree in the root directory of this detection device on UNIHIKER:
Since the HTTP-based Telegram Bot API accepts local files, I was able to send images from UNIHIKER local storage to the given Telegram bot without establishing an SSL connection to set a webhook.