
Description
Due to ever-growing industrialization, forest degradation, and pollution, the delicate balance of ambient gases has shifted. As a result, hazardous air pollutants damage the human respiratory system, drive climate change, and poison wildlife. Even though governments have recognized that it is incumbent on them to keep destructive air contaminants from pervading the ecosystem, eliminating human-made air pollutants is still decades away. Therefore, detecting air pollutants so as to warn people in advance remains crucial. Since some air pollutants can react with each other and spread very rapidly, precedence must be given to detecting highly reactive gases (air contaminants), such as ozone (O3) and nitrogen compounds (NOx, NOy). Thus, in this project, I decided to focus on ozone (O3) and nitrogen dioxide (NO2) concentrations, which indicate dangerous air pollution.

In ambient air, nitrogen oxides can form from diverse combinations of oxygen and nitrogen, and higher combustion temperatures cause more nitric oxide reactions. Under ambient conditions, nitric oxide is rapidly oxidized to nitrogen dioxide by available oxidants, for instance oxygen, ozone, and VOCs (volatile organic compounds). Hence, nitrogen dioxide (NO2) is widely known as a primary air pollutant (contaminant). Since road traffic is considered the principal outdoor source of nitrogen dioxide[^1], densely populated areas are most susceptible to its detrimental effects. Nitrogen dioxide causes a range of harmful effects on the respiratory system, for example increased inflammation of the airways, reduced lung function, more frequent asthma attacks, and cardiovascular harm[^2].

Tropospheric, or ground-level, ozone (O3) is formed by chemical reactions between oxides of nitrogen (NOx) and volatile organic compounds (VOCs).
Sunlight triggers this reaction between the mentioned air pollutants, which are emitted by cars, power plants, industrial boilers, refineries, and chemical plants[^3]. Depending on the level of exposure, ground-level ozone (O3) can affect the respiratory system in various ways, for instance coughing, sore throat, airway inflammation, more frequent asthma attacks, and an increased risk of lung infection. Some of these detrimental effects have been found even in healthy people, and symptoms can be more severe in people with lung diseases such as asthma[^4].
The air station collects the following air quality data:
- Nitrogen dioxide concentration (PPM)
- Ozone concentration (PPB)
- Temperature (°C)
- Humidity (%)
- Wind speed

Then, it runs a neural network model to classify the air pollution level as one of three classes:
- Clean
- Risky
- Unhealthy

Step 1: Designing and printing a metallic air station case
Since I focused on building a budget-friendly and accessible air station that collects air quality data and runs a neural network model to inform the user of air pollution via a PHP web application, I decided to design a sturdy and compact metallic case allowing the user to access the SD card after logging data, place the air quality sensors, and adjust the OV7725 camera effortlessly. To avoid overexposure to dust and prevent loose wire connections, I added a sliding front cover with a handle to the case. Then, I designed a separate camera holder mountable to the left side of the case at four different angles. Also, I decided to inscribe air pollution indicators on the sliding front cover to highlight the imminent pollution risk. Since I needed to attach an anemometer to the case to collect wind speed data, I decided to design a semi-convex structure for the case. This unique shape also serves as a wind deflector that protects the air quality sensors from potential wind damage. I designed the metallic air station case, its sliding front cover, and the mountable camera holder in Autodesk Fusion 360. You can download their STL files below.










- eSilk Copper
- eSilk Bronze










Step 1.1: Improving print quality and speed with the Creality Sonic Pad
Since I wanted to improve my print quality and speed with Klipper, I decided to upgrade my Creality CR-200B 3D Printer with the Creality Sonic Pad. The Creality Sonic Pad is a beginner-friendly device that can control almost any FDM 3D printer on the market with the Klipper firmware. Since the Sonic Pad uses precision-oriented algorithms, it provides remarkable results at higher printing speeds. The built-in input shaper function mitigates oscillation during high-speed printing and smooths ringing to maintain high model quality. Also, it supports G-code model preview.

Although the Sonic Pad is pre-configured for some Creality printers, it does not officially support the CR-200B yet. Therefore, I needed to add the CR-200B as a user-defined printer to the Sonic Pad. Since the Sonic Pad requires unsupported printers to be flashed with self-compiled Klipper firmware before connecting, I flashed my CR-200B with the required Klipper firmware settings via FluiddPI by following this YouTube tutorial. If you do not know how to write a printer configuration file for Klipper, you can download the stock CR-200B configuration file from here.

- After flashing the CR-200B with the Klipper firmware, copy the configuration file (printer.cfg) to a USB drive and connect the drive to the Sonic Pad.
- After setting up the Sonic Pad, select Other models. Then, load the printer.cfg file.




Step 1.2: Assembling the case and making connections & adjustments


Step 2: Developing a web application displaying real-time database updates in PHP, JavaScript, CSS, and MySQL
To provide an exceptional user experience for this AIoT air station, I developed a full-fledged web application from scratch in PHP, HTML, JavaScript, CSS, and MySQL. This web application obtains the collected air quality data, the air pollution level (class) detected by the neural network model, and the captured surveillance footage from FireBeetle ESP32 via an HTTP POST request. After saving the received information to the MySQL database table for further inspection, the web application converts the received raw image data to a JPG file via a Python script. Then, the web application updates itself automatically to show the latest received information and surveillance footage. Also, the application displays all stored air quality data with model detection results in descending order and allows the user to inspect previous surveillance footage. As shown below, the web application consists of three folders and seven code files:

- /assets
- — background.jpg
- — class.php
- — icon.png
- — index.css
- — index.js
- /env_notifications
- — /images
- — bmp_converter.py
- index.php
- show_records.php
- update_data.php









Step 2.1: Converting the raw images transferred by FireBeetle ESP32 via POST requests to JPG files
Since the FireBeetle media board can only generate raw image data due to its built-in OV7725 camera, I needed to convert the generated raw image data to readable image files so as to display them on the web application interface as surveillance footage. Since FireBeetle ESP32 cannot perform this conversion itself due to memory allocation issues, I decided to convert the captured raw image data to a JPG file via the web application. Even though PHP can convert raw image data to different image file formats, doing the conversion in PHP caused bad request issues, since the web application receives the raw image data from FireBeetle ESP32 via HTTP POST requests. Hence, I decided to utilize Python to create JPG files from raw image data, since Python provides modules that handle image conversion in seconds. Using the terminal on LattePanda 3 Delta, the web application executes the bmp_converter.py file directly to convert images.
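The conversion script itself is not included in this excerpt, and the project's actual bmp_converter.py relies on an imaging library to emit JPG files. As a dependency-free sketch of the underlying idea, the snippet below packs raw big-endian RGB565 pixels (the format commonly produced by the OV7725) into a 24-bit BMP using only the standard library; the pixel format and byte order are my assumptions.

```python
import struct

def rgb565_to_bmp(raw: bytes, width: int, height: int) -> bytes:
    """Convert big-endian RGB565 pixel data to a 24-bit BMP file (as bytes)."""
    row_size = (width * 3 + 3) & ~3          # BMP rows are padded to 4 bytes
    image_size = row_size * height
    file_size = 54 + image_size
    header = struct.pack(
        "<2sIHHIIiiHHIIiiII",
        b"BM", file_size, 0, 0, 54,          # BITMAPFILEHEADER
        40, width, -height, 1, 24,           # BITMAPINFOHEADER (negative height = top-down)
        0, image_size, 2835, 2835, 0, 0,
    )
    rows = bytearray()
    for y in range(height):
        for x in range(width):
            i = 2 * (y * width + x)
            pixel = (raw[i] << 8) | raw[i + 1]        # big-endian RGB565
            r = (pixel >> 11) & 0x1F
            g = (pixel >> 5) & 0x3F
            b = pixel & 0x1F
            # Expand 5/6-bit channels to 8 bits; BMP stores pixels in BGR order.
            rows += bytes(((b << 3) | (b >> 2),
                           (g << 2) | (g >> 4),
                           (r << 3) | (r >> 2)))
        rows += b"\x00" * (row_size - width * 3)      # per-row padding
    return header + bytes(rows)
```

A real JPG conversion would simply hand the decoded pixels to an imaging library instead of packing a BMP header by hand.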

Step 2.2: Setting and running the web application on LattePanda 3 Delta 864
Since I received a test sample of the brand-new LattePanda 3 Delta 864, I decided to host my web application on the LattePanda 3 Delta. Therefore, I needed to set up a LAMP web server. The LattePanda 3 Delta is a pocket-sized hackable computer that provides outstanding performance with the Intel 11th-generation Celeron N5105 processor. Since the LattePanda 3 Delta can run the XAMPP application, it is effortless to create a server with a MariaDB database on it.




Step 2.3: Tracking real-time model detection results and displaying surveillance footage captured by FireBeetle ESP32
After setting up the web application on LattePanda 3 Delta 864:

🎈⚠️📲 The web application (update_data.php) saves the information transferred by FireBeetle ESP32 via an HTTP POST request with URL query parameters to the given MySQL database table:

/update_data.php?no2=0.15&o3=25&temperature=25.20&humidity=65.50&wind_speed=3&model_result=Clean
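The endpoint itself is written in PHP; as a language-neutral illustration of what update_data.php extracts from such a request, this hypothetical Python sketch parses the same query parameters:

```python
from urllib.parse import urlparse, parse_qs

def parse_update_request(url: str) -> dict:
    """Extract the air quality fields FireBeetle ESP32 appends as URL query parameters."""
    query = parse_qs(urlparse(url).query)
    # Each field arrives once; keep the first value for every parameter.
    return {key: values[0] for key, values in query.items()}

record = parse_update_request(
    "/update_data.php?no2=0.15&o3=25&temperature=25.20&humidity=65.50"
    "&wind_speed=3&model_result=Clean"
)
```

The resulting dictionary maps one-to-one onto the columns of the MySQL database table described above.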





The web application color-codes each air pollution level on its interface:
- Clean ➜ Green
- Risky ➜ Midnight Green
- Unhealthy ➜ Red



Step 3: Setting up FireBeetle ESP32 on Arduino IDE
Before proceeding with the following steps, I needed to set up FireBeetle ESP32 on the Arduino IDE and install the required libraries for this project. Although DFRobot provides a specific driver package and library for FireBeetle ESP32 and its media board, I encountered some issues while running different sensor libraries in combination with the provided media board library. Therefore, I decided to utilize the latest release of the official Arduino-ESP32 package and modify its esp_camera library settings to make it compatible with the FireBeetle media board.

To add the Arduino-ESP32 board package to the Arduino IDE, navigate to File ➡ Preferences and paste the URL below under Additional Boards Manager URLs:

https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json


Step 3.1: Displaying images on the SH1106 OLED screen
To display black-and-white images on the SH1106 OLED screen successfully, I needed to create monochromatic bitmaps from PNG or JPG files and convert those bitmaps to data arrays:

- First of all, download the LCD Assistant.
- Then, upload a monochromatic bitmap and select Vertical or Horizontal depending on the screen type.
- Convert the bitmap and save the output (data array).
- Finally, add the data array to the code and print it on the screen.
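LCD Assistant performs this conversion through a GUI. As a rough, hypothetical sketch of what the tool outputs for a vertically addressed display like the SH1106, the snippet below packs a monochrome pixel grid into page-ordered bytes (LSB at the top of each 8-row page):

```python
def pack_vertical(pixels):
    """Pack a monochrome pixel grid (rows of 0/1) into SH1106-style
    vertical bytes: each byte covers 8 rows of one column, LSB on top."""
    height = len(pixels)
    width = len(pixels[0])
    data = []
    for page in range(0, height, 8):          # the display is split into 8-row pages
        for x in range(width):
            byte = 0
            for bit in range(8):
                y = page + bit
                if y < height and pixels[y][x]:
                    byte |= 1 << bit
            data.append(byte)
    return data

# An 8x8 checkerboard packs into alternating 0xAA/0x55 column bytes.
checker = [[(x + y) % 2 for x in range(8)] for y in range(8)]
array = pack_vertical(checker)
```

Printing each value as `0x%02X` yields the C data array you would paste into the sketch.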

Step 4: Collecting air quality data and communicating with FireBeetle ESP32 via serial communication w/ Arduino Mega
After setting up FireBeetle ESP32 and installing the required libraries, I programmed Arduino Mega to collect air quality data and transmit the collected data to FireBeetle ESP32 via serial communication. As explained in the previous steps, I encountered Arduino library incompatibilities and memory allocation issues when I connected the sensors directly to FireBeetle ESP32. Arduino Mega collects the following data:
- Nitrogen dioxide concentration (PPM)
- Ozone concentration (PPB)
- Temperature (°C)
- Humidity (%)
- Wind speed
I also added three control buttons to assign an air pollution class (label) to each sample while collecting data:
- Control Button (A) ➡ Clean
- Control Button (B) ➡ Risky
- Control Button (C) ➡ Unhealthy
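The exact serial packet layout between the two boards is not shown in this excerpt. Assuming a simple comma-separated line per reading, the transfer could be sketched like this (the field order and number formatting are illustrative, not the project's exact protocol):

```python
def build_packet(no2, o3, temperature, humidity, wind_speed):
    """Serialize one reading as a comma-separated serial packet (assumed format)."""
    return f"{no2:.2f},{o3:.1f},{temperature:.2f},{humidity:.2f},{wind_speed:.1f}\n"

def parse_packet(line: str):
    """Split a received packet back into named float fields."""
    fields = ("no2", "o3", "temperature", "humidity", "wind_speed")
    return dict(zip(fields, (float(v) for v in line.strip().split(","))))

# Arduino Mega would Serial.print() such a line; FireBeetle ESP32 would parse it.
packet = build_packet(0.15, 25, 25.2, 65.5, 3)
reading = parse_packet(packet)
```

A newline-terminated text packet like this keeps the receiving side trivial: read until `\n`, split on commas, convert to floats.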






Step 4.1: Logging the transmitted air quality data in a CSV file on the SD card w/ FireBeetle ESP32
After uploading and running the code for collecting air quality data and transferring it to FireBeetle ESP32 via serial communication:

🎈⚠️📲 If the electrochemical gas sensors are working accurately, the air station turns the RGB LED blue and waits for 3 minutes while the sensors warm up. Then, it collects and transmits the following data:
- Nitrogen dioxide concentration (PPM)
- Ozone concentration (PPB)
- Temperature (°C)
- Humidity (%)
- Wind speed









Step 5: Building a neural network model with Edge Impulse
In this project, I needed an accurate air pollution level (label) for each data record composed of the air quality data I collected. Therefore, I needed to obtain local Air Quality Index (AQI) estimations for my region. Since IQAir calculates AQI estimations based on satellite PM2.5 data for locations lacking ground-based air monitoring stations and provides hourly AQI estimations with air quality levels by location, I decided to employ IQAir to obtain local AQI estimations. Before collecting and storing the air quality data, I checked IQAir for the AQI estimation of my region. Then, I derived an air pollution class (level) from the AQI estimation provided by IQAir in order to assign a label empirically to my samples (data records):
- Clean
- Risky
- Unhealthy
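The precise cut-offs I used to derive the three classes are not spelled out here; assuming thresholds aligned with the standard AQI bands, the labeling step could be sketched as:

```python
def aqi_to_class(aqi: int) -> str:
    """Map an AQI estimation to one of the project's three labels.
    The thresholds are illustrative assumptions, not the exact cut-offs used."""
    if aqi <= 50:          # 'Good' band on the standard AQI scale
        return "Clean"
    if aqi <= 100:         # 'Moderate' band
        return "Risky"
    return "Unhealthy"     # anything above 100
```

Whatever thresholds are chosen, the important point is that every collected sample gets exactly one of the three labels before being uploaded to Edge Impulse.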


- Data Scaling (Normalizing)
- Data Preprocessing
Step 5.1: Preprocessing and scaling the data set to create formatted samples for Edge Impulse
As long as the CSV file includes a header defining data fields, Edge Impulse can distinguish data records as individual samples in different data structures thanks to its CSV Wizard while adding existing data to an Edge Impulse project. Therefore, there is no need to preprocess single CSV file data sets even if the data type is not time series. Since Edge Impulse can infer the uploaded sample's label from its file name, I employed FireBeetle ESP32 to create a new CSV file for each data record and name the generated files by combining the given air pollution level with a sample number incremented automatically per class (label):
- Clean.training.sample_1.csv
- Clean.training.sample_2.csv
- Risky.training.sample_1.csv
- Risky.training.sample_2.csv
- Unhealthy.training.sample_1.csv
- Unhealthy.training.sample_2.csv
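The file-naming scheme above can be sketched as follows; the per-class counter is the only state involved:

```python
from collections import defaultdict

class SampleNamer:
    """Generate Edge Impulse-friendly file names, incrementing a counter per class."""
    def __init__(self):
        self.counts = defaultdict(int)

    def next_name(self, label: str, subset: str = "training") -> str:
        self.counts[label] += 1
        return f"{label}.{subset}.sample_{self.counts[label]}.csv"

namer = SampleNamer()
names = [namer.next_name("Clean"), namer.next_name("Clean"), namer.next_name("Risky")]
```

Because the label is the first dot-separated token, Edge Impulse assigns it automatically during upload, so no manual labeling is needed.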

Step 5.2: Uploading formatted samples to Edge Impulse
After generating training and testing samples successfully, I uploaded them to my project on Edge Impulse. First of all, sign up for Edge Impulse and create a new project.


Step 5.3: Training the model on air pollution levels
After uploading my training and testing samples successfully, I designed an impulse and trained it on air pollution levels (classes). An impulse is a custom neural network model in Edge Impulse. I created my impulse by employing the Raw Data processing block and the Classification learning block. The Raw Data processing block generates windows from data samples without any specific signal processing. The Classification learning block represents a Keras neural network model and lets the user change the model settings, architecture, and layers.

Go to the Create impulse page, select the Raw Data processing block and the Classification learning block, and then click Save Impulse.





I configured the neural network with the following settings and architecture:
- Number of training cycles ➡ 50
- Learning rate ➡ 0.005
- Validation set size ➡ 15
- Dense layer (20 neurons)
- Dense layer (10 neurons)
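To illustrate the data flow through this architecture (a features vector through two Dense layers into a 3-class softmax output), here is a dependency-free forward pass. The weight values are arbitrary placeholders, not the trained parameters Edge Impulse produces:

```python
import math

def dense(x, weights, bias, activation=None):
    """One fully connected layer: y = activation(W·x + b)."""
    y = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(weights, bias)]
    if activation == "relu":
        y = [max(0.0, v) for v in y]
    return y

def softmax(x):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(v - max(x)) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    """Forward pass through the 20-10-3 architecture with placeholder weights."""
    h1 = dense(features, [[0.01] * len(features)] * 20, [0.0] * 20, "relu")  # Dense (20 neurons)
    h2 = dense(h1, [[0.02] * 20] * 10, [0.0] * 10, "relu")                   # Dense (10 neurons)
    return softmax(dense(h2, [[0.03] * 10] * 3, [0.0] * 3))                  # 3 classes

scores = classify([0.15, 25.0, 25.2, 65.5, 3.0])
```

With identical placeholder weights per class the three outputs are equal; after training, the learned weights push the probability mass toward the correct air pollution level.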

Step 5.4: Evaluating the model accuracy and deploying the model
After building and training my neural network model, I tested its accuracy and validity by utilizing the testing samples. The evaluated accuracy of the model is 97.78%. To validate the trained model, go to the Model testing page and click Classify all.





Step 6: Setting up the Edge Impulse model on FireBeetle ESP32
After building, training, and deploying my model as an Arduino library on Edge Impulse, I needed to upload the generated Arduino library to FireBeetle ESP32 to run the model directly, so as to create an easy-to-use and capable air station operating with minimal latency, memory usage, and power consumption. Since Edge Impulse optimizes and formats signal processing, configuration, and learning blocks into a single package while deploying models as Arduino libraries, I was able to import my model effortlessly to run inferences.

- After downloading the model as an Arduino library in the ZIP file format, go to Sketch ➡ Include Library ➡ Add .ZIP Library…
- Then, include the AI-assisted_Air_Quality_Monitor_inferencing.h file to import the Edge Impulse neural network model.

- User Button (Built-in) ➡ Run Inference
- Clean
- Risky
- Unhealthy









Step 7: Running the model on FireBeetle ESP32 to forecast air pollution levels and transferring the model results w/ surveillance footage via POST requests
My Edge Impulse neural network model predicts the probability of each label (air pollution class) for the given features buffer as an array of 3 numbers. They represent the model's "confidence" that the given features buffer corresponds to each of the three air pollution levels (classes) [0 - 2], as shown in Step 5:
- 0 — Clean
- 1 — Risky
- 2 — Unhealthy
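Turning that confidence array into a single detected class is a simple argmax over the three probabilities; a sketch, assuming the label order above:

```python
def detect_label(scores):
    """Pick the class with the highest confidence from the model's output array."""
    labels = ("Clean", "Risky", "Unhealthy")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

# Example output array from an inference run (illustrative values).
label, confidence = detect_label([0.92, 0.05, 0.03])
```

The detected label is what the air station transmits to the web application as the model_result query parameter.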









Videos and Conclusion
Data collection | AI-assisted Air Quality Monitor w/ IoT Surveillance

Experimenting with the model | AI-assisted Air Quality Monitor w/ IoT Surveillance

Further Discussions
By applying neural network models trained on air quality data to detect air pollution levels, we can:

🎈⚠️📲 prevent human-made air pollutants from harming the respiratory system,

🎈⚠️📲 reduce the risk of increased asthma attacks and cardiovascular harm,

🎈⚠️📲 protect people with lung diseases from severe symptoms of air pollution,

🎈⚠️📲 provide prescient warnings regarding a surge in photochemical oxidant reactions and transmission rates.