Recognize Voice Commands with the Particle Photon 2

Use a Particle Photon 2 to turn a device on or off by listening for a keyword or audio event, and opening or closing a Relay accordingly.

Created By: Roni Bandini

Public Project Link: https://studio.edgeimpulse.com/public/288386/latest

GitHub Repository: https://github.com/ronibandini/Photon2VoiceCommand

Introduction

In industrial settings, workers often find themselves unable to operate machinery manually because concurrent tasks demand the use of their hands. Machine Learning (ML) has emerged as a transformative solution, enabling compact devices to understand and respond to vocal instructions and to start or stop machines as needed.

In this project we will leverage the Edge Impulse platform to train a customized ML model, and then deploy it onto a Particle Photon 2 microcontroller board. The Photon 2 will be connected to a PDM microphone and a Relay module, creating an integrated system for a practical demonstration.

Prerequisites

The Photon 2 is an interesting, high quality board made by Particle. It has 5 GHz WiFi and Bluetooth (BLE) 5, an ARM Cortex-M33 CPU running at 200 MHz, 2 MB of storage for user applications, 3 MB of RAM available to user applications, and a 2 MB flash file system.

The board measures 1.5 x 0.78 inches (roughly 3.8 x 2 cm) and comes with pre-soldered, labeled male headers. It also has 2 buttons (Reset and Mode), one RGB LED, a LiPo charger with a JST-PH port, and a 10-pin micro JTAG connector for SWD (Serial Wire Debug).

Besides the Photon 2 board, the Edge AI Kit is required for this project, which includes a W18340-A PDM MEMS microphone by Adafruit. The Edge AI Kit also includes jumper wires, a protoboard, a PIR sensor, a distance sensor, an LED, switches, resistors, a vibration sensor, an accelerometer, and a loudness sensor for many other AI projects. The Relay module is not included, but it is a cheap, common device that can be ordered online from many electronics stores.

Circuit

The PDM mic comes without header pins soldered, so we need to solder 5 headers and discard the sixth one (if using a 6-pin header, as I've done here). After that, we will connect the PDM mic to the board using jumper cables.

The PDM microphone connections should be:

  • PDM GND to Photon 2 GND

  • PDM 3V to Photon 2 3V3

  • PDM CLK to Photon 2 A0

  • PDM DAT to Photon 2 A1

  • The PDM mic SEL pin is not used in this scenario.

The Relay module should be connected as follows:

  • Relay GND to Photon 2 GND

  • Relay VCC to Photon 2 VCC

  • Relay Signal to Photon 2 D3

Since the Photon 2 has one GND and one 3V3 pin available on the board, the Microphone GND and Relay GND should be connected together, and likewise the Microphone 3V and Relay VCC. You can do that using the protoboard and male-female jumper cables from the Edge AI Kit, or you can cut and solder the cables.
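Before adding the model, it can help to confirm the relay wiring on its own. The following is a minimal test sketch (my own addition, assuming the relay signal is wired to D3 as described above, and that the module is active-low, which is common for these inexpensive boards); it simply toggles the relay every two seconds:

#include "Particle.h"

const int relayPin = D3;          // relay signal pin, as wired above

void setup() {
    pinMode(relayPin, OUTPUT);
    digitalWrite(relayPin, HIGH); // start with the relay released (active-low module assumed)
}

void loop() {
    digitalWrite(relayPin, LOW);  // energize the relay - you should hear a click
    delay(2000);
    digitalWrite(relayPin, HIGH); // release the relay
    delay(2000);
}

If the relay never clicks, try swapping the HIGH and LOW values, since some modules are active-high instead.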

Data Acquisition and Model Training

We will create an account at Edge Impulse, then log in and create a new project.

We will use the computer to record data samples for two labels: machine and background. Go to Collect data, Connect a device, and connect to your computer.

Enable your microphone permissions if necessary, and repeat the word "machine" several times, leaving a few seconds of silence between each repetition. For the background label, just record any continuous background sound.

Wait a few seconds to allow the recording to be uploaded to Edge Impulse before going to the next step.
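As an optional aside (not part of the original walkthrough): if you already have WAV recordings on disk, they can also be pushed into the project with the Edge Impulse CLI uploader. For example, assuming the CLI is installed and logged in, and machine01.wav is a hypothetical local recording:

edge-impulse-uploader --category training --label machine machine01.wav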

Now we are going to Split the samples. Click on the three vertical dots next to the recording and select "Split sample". Leave the length set to the default, 1000ms. Repeat this process for both labels.

Now we will design an Impulse. In the Edge Impulse Studio, go to Create impulse, set the Window size to 1000ms, the Window increase to 500ms, and add the 'Audio MFCC' Processing Block, which is perfect for voices. Then add 'Classification (Keras)' as the Learning Block. Now click Save impulse.

Next we will go to MFCC parameters. This page allows us to configure the MFCC block, and lets us preview how the data will be transformed. The MFCC block transforms a window of audio into a table of data where each row represents a range of frequencies, and each column represents a span of time. The value contained within each cell reflects the amplitude of its associated range of frequencies during that span of time.
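As general background (not specific to how the Studio block is implemented internally), each MFCC coefficient is essentially a discrete cosine transform of the log-energies produced by a mel-scaled filterbank for one span of time:

$$c_n = \sum_{m=1}^{M} \log(S_m)\,\cos\!\left[\frac{\pi n}{M}\left(m - \tfrac{1}{2}\right)\right], \qquad n = 0, 1, \dots, N-1$$

where $S_m$ is the energy in the m-th mel filter and $N$ is the number of cepstral coefficients, i.e. the number of rows in the table described above.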

We will leave the default values, which are pre-configured according to the data. Then we click on Generate Features.

Now we go to the Neural Network configuration. Click on Classifier; the settings can be left at their default values, then scroll down and click Start Training. In this case we got 100% accuracy, which is uncommon, but the two classes are quite different, so the neural network is able to separate them well.

You can upload new samples and test them with the Model Testing feature.

The final step is to deploy a library to the Particle Photon 2. We will click Deployment, begin to type the word Particle in the search box, then select the Particle Library and click Build.

Visual Studio Code

The main difference when working with the Photon 2 is that the Arduino IDE is not used to upload the code and libraries. Instead, Microsoft Visual Studio Code is used, and there are several setup steps to follow carefully.

Windows Installation

Install Microsoft Visual Studio Code, then add the Particle Workbench extension.

Unzip the Particle library exported from Edge Impulse Studio.

In VS Code, press Ctrl+Shift+P to open the Command Palette and select "Particle: Import Project" to choose the library. The first time, VS Code will download the required dependencies.

In VS Code, at the lower right there will be a prompt with a button to open the properties file. Navigate to the unzipped folder, select project.properties, and confirm that you trust the folder when prompted.

The src/main.cpp code included in the zip file detects the trained keyword and prints predictions over the serial console, so we need to add some code to control the Relay.

We will open src/main.cpp and add the following:

// Before Setup function

float detectionLimit = 0.8; // confidence threshold to accept the keyword
int machineOn = 0;          // current machine state (0 = off, 1 = on)

and

// Inside Setup function

pinMode(D3, OUTPUT); // set pin D3 to control the Relay module

Then, inside the loop we will add this code to control the Relay. In this case, the label contains the keyword "muted" instead of "machine", as I was exploring audio output. If you are going to use another label, just change the word "muted" to the keyword contained in your own label.

for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (strstr(result.classification[ix].label, "muted") && result.classification[ix].value > detectionLimit) {
        if (machineOn == 1) {
            ei_printf(" - Turning the machine off");
            digitalWrite(D3, HIGH);
            machineOn = 0;
        }
        else {
            ei_printf(" - Turning the machine on");
            digitalWrite(D3, LOW);
            machineOn = 1;
        }
    }
}

Note: you can also download the main.cpp file from https://github.com/ronibandini/Photon2VoiceCommand to make sure you have everything entered correctly.

Now press Ctrl+Shift+P to open the Command Palette again, and this time select "Particle: Configure Project for Device". Select deviceOS@5.5.0, board P2, and press ESC to skip the device name.

Note: if you get a "Microphone library not found" error, press Ctrl+Shift+P and install the Microphone_PDM@0.0.2 library.

Finally, bring up the Command Palette again with Ctrl+Shift+P and select "Particle: Flash application and device, local" (on Windows it will take around 5 minutes to flash).

If you obtain an "Argument list too long" error during flashing, Particle Support recommends using Docker to build instead: https://docs.particle.io/firmware/best-practices/firmware-build-options/#using-buildpack (Docker Desktop for Windows: https://docs.docker.com/desktop/install/windows-install/).
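Once the firmware is flashed, you can verify that the keyword is being recognized before connecting any real machinery. One way (assuming the Particle CLI is installed) is to watch the predictions the firmware prints over USB serial:

particle serial monitor --follow

You should see the classification value for each label, and the " - Turning the machine on/off" messages whenever the keyword crosses the detection threshold.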

Linux (Ubuntu) Installation

Another option is to use Mac or Linux. For the demo below, the main.cpp modifications were made following the work done by Particle for this example: https://docs.particle.io/getting-started/machine-learning/youre-muted

Install a lightweight Ubuntu distribution, such as Lubuntu, and install VSCode. Then:

  1. Open a Terminal window and execute: sudo apt-get install libarchive-zip-perl (this step avoids an error where the crc32 tool is not found)

  2. Press Ctrl+P, type ext install particle.particle-vscode-pack, and press Enter to install the Particle Workbench extension pack.

  3. Log in with your Particle credentials.

  4. Create a project.

  5. Import the unzipped folder and accept "Trust all".

  6. Now press Ctrl+Shift+P, choose "Particle: Configure Project for Device", and select deviceOS@5.3.2 and board P2.

  7. Bring up the Command Palette again with Ctrl+Shift+P and choose "Particle: Flash application and device, local".

Demo Video

Conclusion

Machine Learning not only facilitates the recognition of voice commands but also empowers the identification of distinct machine-generated sounds, enabling automatic shutdown in response to specific malfunction indicators. This functionality proves invaluable in enhancing operational safety and efficiency within industrial environments.

Moreover, the compact form factor and cost-effectiveness of boards like the Particle Photon 2, coupled with their ability to manage external devices, render them an enticing augmentation for various industries. They offer a gateway to harnessing the potential of ML-powered automation within diverse manufacturing settings.

Contact

https://www.instagram.com/ronibandini
https://twitter.com/RoniBandini
https://bandini.medium.com