Gesture Recognition - Bangle.js Smartwatch

Teach your smartwatch to recognize different movements and motions of your watch hand.


Created By: Thomas Vikström

Public Project Link: https://studio.edgeimpulse.com/public/77262/latest

Introduction

In this tutorial you will learn how to get started with machine learning on your Bangle.js smartwatch. Specifically, you will build and train a model that recognizes different movements of your watch hand. The steps include how to collect data, how to use Edge Impulse for the machine learning part, and how to finally upload the trained model back to the watch and use it there.

Prerequisites

Hardware

  • Bangle.js, version 1 or 2 (https://shop.espruino.com/banglejs2)

    • Theoretically the Bangle.js emulator might work as well, but of course you can't collect real accelerometer or heart rate data with an emulator!

  • Computer with Bluetooth (BLE)

Software

  • Python

    • used to split a file with samples into separate .CSV-files for importing into Edge Impulse

    • not strictly necessary, but very useful if you want to collect lots of samples

    • Notepad, Notepad++, Excel etc. can also be used to manually split the files, but that is not feasible with lots of samples

    • for information about how to install or use Python, check e.g. the Python documentation (https://docs.python.org/)

  • Espruino Web IDE (https://www.espruino.com/ide/), used to upload code to the watch and transfer files to your computer

  • a free Edge Impulse account (https://studio.edgeimpulse.com/)

Preparation

  • Get the watch up and running and connect it to your computer by following the official Bangle.js guidelines.

  • Install the app Gesture Test on your watch from the Bangle App Loader (https://banglejs.com/apps/).

Collect Gesture Samples

This part will guide you through how to use your watch to collect multiple samples for one gesture type at a time.

  1. Pair your computer with the watch using Espruino Web IDE

  2. Paste the Gesture collecting code below into the right side of Espruino Web IDE (adapted from this code)

    • the code will create a text file in the watch memory

  3. Name the event you are going to collect samples for by changing the line event="left";

    • use e.g. event="left"; for twitching your watch hand left, and later on event="right"; for the opposite direction

    • upload the code to RAM. Do not upload it to flash or storage; in the worst case you might need to completely reset the watch.

  4. Perform the gesture

    • repeat the gesture many times, the more the merrier!

      • wait a second between each repetition

    • the gesture collecting code will append each sample to the .CSV-file

    • a graph will also be shown on your watch screen

  5. Repeat steps 3-4 above for each gesture type, remembering to change event="<gesture>"; where <gesture> is the hand movement you will collect

  6. The devil is in the details: do not e.g. remove the seemingly insignificant semicolon ; !

Gesture Collecting Code

// ******* Gesture collecting code ********
name = "Gesture";
event = "left";   // change this to the gesture you are about to collect

var fname = 1;    // file number, becomes part of the .CSV file name

// Called every time the watch detects a gesture; d holds the raw
// accelerometer readings as consecutive x, y, z triplets
function gotGesture(d) {
  // open e.g. "left.1.csv" in watch storage, in append mode
  var f = require("Storage").open(event + "." + fname + ".csv", "a");

  // write a header line before each sample - the Python script later
  // uses this line to find where each sample starts
  print("timestamp, x, y, z");
  f.write("timestamp, x, y, z\n");
  for (var j = 0; j < d.length; j += 3) {
    print(j + ", ", d[j] + ", " + d[j+1] + ", " + d[j+2]);
    f.write(j + ", " + d[j] + ", " + d[j+1] + ", " + d[j+2] + "\n");
  }

  // draw the sample on the watch screen, one colored line per axis
  g.clear();
  g.setColor(1, 1, 1);
  var my = g.getHeight() / 2;         // vertical midpoint of the screen
  var sy = my / 128;                  // vertical scale
  var sx = g.getWidth() / (50 * 3);   // horizontal scale
  g.drawLine(0, my, g.getWidth(), my);
  for (var i = 0; i < d.length - 3; i += 3) {
    for (var c = 0; c < 3; c++) {
      g.setColor(c == 0, c == 1, c == 2);  // red = x, green = y, blue = z
      g.drawLine(i*sx, my + d[i+c]*sy, (i+3)*sx, my + d[i+c+3]*sy);
    }
  }
  g.flip(1);
}

// run gotGesture() whenever the watch senses a gesture
Bangle.on('gesture', gotGesture);

Transfer .CSV-files from Bangle.js to Your Computer

This part will guide you through how to transfer the .CSV-files from your watch to your computer.

  • In Espruino Web IDE, click the Storage icon (4 discs) in the middle of the screen

  • Search for your file or files; they start with the event name you provided in the earlier steps, e.g. left.1.csv (StorageFile)

  • Click Save (the floppy disc icon) for one file at a time and save the files to a folder of your choice, e.g. c:\temp
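
The Web IDE's Save button is the simplest route, but you can also inspect a recording directly from the console pane on the left side of Espruino Web IDE. A minimal sketch, assuming you collected a file named left.1.csv; it prints the StorageFile line by line:

// Print a recorded StorageFile to the console line by line.
// "left.1.csv" is an example name; use one of your own recordings.
var f = require("Storage").open("left.1.csv", "r");
var line;
while ((line = f.readLine()) !== undefined) {
  print(line);
}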

Split .CSV-files using Python

This part will guide you through how to split the .CSV-files you've downloaded from your watch into separate .CSV-files. The reason for this is that Edge Impulse requires one .CSV-file per sample.

  1. Copy the below Python code (shamelessly copied from Stackoverflow) into your favourite Python editor.

  2. Replace the path on the second line (starting with PATENTS = ...) with the full path and filename of the first file you want to split, i.e. a file you downloaded in the previous steps.

  3. Run the code in your Python editor

    • The program searches the original file for the string 'timestamp, x, y, z' and creates a new file for each occurrence (= sample) it finds.

    • If you don't use Python, you need to split the file for each sample using some other method, manual or automatic. Remember that the samples are not all of the same size, so the number of rows will vary.

    • You should now have several .CSV-files in the folder you chose. The files will be named like left.1.csv (StorageFile)-15.csv, where -15 at the end is a running number.

  4. Repeat steps 2-3 above for each file you downloaded from your watch.

import re

# Full path to a file downloaded from the watch - replace with your own
PATENTS = 'C:/temp/left.1.csv (StorageFile)'

def split_file(filename):
    # Open the downloaded file for reading
    with open(filename, "r") as r:

        # Counter, used as a suffix for the output files
        n = 0

        # Read the file line by line
        for i, line in enumerate(r):

            # Each sample starts with a header line - when one is found,
            # increase counter n so that a new output file is started
            if re.match(r'timestamp, x, y, z', line):
                n += 1

                # Without this "if", numbering would start from 1 instead
                # of 0 whenever the very first line is a header (as here)
                if i == 0:
                    n = 0

            # Append the line to the current output file,
            # e.g. "left.1.csv (StorageFile)-0.csv"
            with open("{}-{}.csv".format(filename, n), "a") as f:
                f.write(line)

split_file(PATENTS)
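
If you recorded several gestures, you can also split every downloaded file in one go instead of editing PATENTS for each file. A small sketch, assuming the downloads are in C:/temp and using the split_file() function above (which names its output after the file it is given):

import glob

# Split every file downloaded from the watch in one go; the pattern
# matches the names Bangle.js gives its recordings
for name in glob.glob('C:/temp/*.csv (StorageFile)'):
    split_file(name)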

Use Edge Impulse for Machine Learning

In this part you will learn how to upload the sample files you've created earlier, create a machine learning model, and train and finally analyse it. This tutorial only covers the essential steps needed for Bangle.js; to learn more about Edge Impulse, see e.g. the getting started and continuous motion recognition tutorials in the Edge Impulse documentation.

Log in and create a project

Log in to Edge Impulse (https://studio.edgeimpulse.com/) using the credentials for the free account you created in the beginning.

  • Create a new project and give it a name, why not Bangle.js

  • Select Accelerometer data when asked for the type of data you are dealing with.

  • Click Let's get started

Upload sample data

  • Select Data acquisition from the left hand menu

  • Click on the icon labeled Upload existing data

  • Click on Choose files

    • Navigate to the folder you used to store the .CSV-files (e.g. c:\temp)

    • Select all the sample files that were created earlier, but not the original files you downloaded from your watch, i.e. select only the .CSV-files with a number at the end of the file name, e.g. left.1.csv (StorageFile)-0.csv.

    • You can also upload smaller batches at a time

    • Automatically split between training and testing and Infer from filename should both be selected

  • Click Begin upload - this will now quickly upload the files to your project.

    • The upload process is shown on the right side. If everything goes well, you should at the end see a message like this: Done. Files uploaded successful: 85. Files that failed to upload: 0. Job completed

  • Take a look at a sample by selecting any row

  • Notice that the labels (left and right in this example) were automatically inferred from the filenames you used.

  • Always strive to collect a roughly equal number of samples for each gesture. You can see the balance in the pie chart on the left.

  • Also notice that Edge Impulse split the sample files so that approximately 80 % will be used for training and 20 % for testing purposes.

  • Through the four small icons you can filter your data, select multiple items, upload more data or see a slightly more detailed list view. With the help of these you can e.g. mass delete many files at a time.
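
The studio uploader is all you need for this tutorial, but uploads can also be scripted against Edge Impulse's ingestion API, which is handy once you collect hundreds of samples. A minimal sketch, assuming the split files are still in c:\temp; the API key comes from your project's Dashboard > Keys page, and the label is inferred from the start of the file name just like the studio's Infer from filename option:

import glob
import os
import requests

API_KEY = "ei_..."  # replace with your project's API key (Dashboard > Keys)

# Upload every split sample file as training data; the label
# ("left", "right", ...) is taken from the start of the file name
for path in glob.glob(r"C:\temp\*-*.csv"):
    label = os.path.basename(path).split(".")[0]
    with open(path, "rb") as f:
        res = requests.post(
            "https://ingestion.edgeimpulse.com/api/training/files",
            headers={"x-api-key": API_KEY, "x-label": label},
            files={"data": (os.path.basename(path), f, "text/csv")},
        )
    print(path, res.status_code)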

Create an impulse

An impulse takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data. These steps will create an impulse.

  • Click Create impulse

  • Change the Window size and Window increase settings so that a window covers roughly one complete gesture sample

  • Add the Raw Data processing block

  • Add the Classification (Keras) learning block

  • Click Save Impulse

  • Note that you often need to tweak one or several of these settings, depending on what you want to achieve and on the quality and quantity of your data.

Generate features

  • Click Raw data from the left hand menu

    • You will see a graph of one of the samples as well as the raw features.

  • In this case you don't need to change anything, so click Save parameters which will take you to the second tab.

  • Click Generate features

    • This processes the samples

    • After a while you will see a graph in the Feature explorer. This gives you a 3D view of how well your data can be clustered into different groups. In an ideal situation, all similar samples are clustered into the same group, with a clear distinction between the groups. If that's not the case, no worries at this point; the neural network algorithm will in many cases still be able to do a very good job!

Train the neural network

Here you will train the neural network and analyse its performance.

  • Click NN Classifier from the left hand menu

  • Change the Number of training cycles to 100. This is another parameter to tweak: the higher the number, the longer the training takes, but also the better the network will perform, at least until it can't improve any more.

  • Click on Start training

  • Within a few minutes, depending on the number of labels and data quantity you have, the training will finish.

  • The graph shows the training performance and accuracy. While 100 % looks like a perfect score, it isn't necessarily so: the network might still perform poorly in real situations when confronted with sample data it has not seen before.

Download the trained model

Here you will download the trained model to your computer.

  • Click Dashboard from the left hand menu

  • Scroll down to the section Download block output and click on the icon next to NN Classifier model TensorFlow Lite (int8 quantized)

    • The float32 model might sometimes perform slightly better than the int8 model, but it requires more memory and might therefore cause Bangle.js to crash.

  • Save the file to a folder of your choice
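
Before transferring the model in the next section, it can be worth checking how much free storage the watch has, especially if you try the larger float32 model. A one-liner for the Espruino Web IDE console, assuming the watch is connected (note that this reports free storage, not the RAM used during inference):

// Report the watch's free storage in bytes
print(require("Storage").getFree(), "bytes free");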

Deployment

Transfer the trained model to Bangle.js from your computer

This part will guide you through how to transfer the model file from your computer to Bangle.js.

  • In Espruino Web IDE, click the Storage icon (4 discs) in the middle of the screen

  • Click Upload a file

  • Select the model file you downloaded from Edge Impulse

  • Change the filename to .tfmodel and click Ok

  • Create a text file, e.g. with Notepad

    • Write the event names in alphabetical order, separated by commas, e.g. left,right

    • Save the file to a folder of your choice

  • In Espruino Web IDE, click the Storage icon (4 discs) in the middle of the screen

  • Select the file you just created

  • Change the filename to .tfnames and click Ok
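
If you prefer, the names file can also be generated with a couple of lines of Python instead of a text editor. A sketch, assuming the labels left and right and an example path:

# Write the label file: a single line with the event names in
# alphabetical order, separated by commas
with open(r"C:\temp\gestures.txt", "w") as f:
    f.write("left,right")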

Test the gestures on Bangle.js!

Finally you will be able to test how well the trained model performs in real life! Just a few steps left.

  • Paste the below code into the right side in Espruino Web IDE

  • Upload the code to RAM

    • This short program will trigger your watch to sense movements and try to recognise which movement it was.

    • The recognised movement, e.g. left or right, will be shown in the left window in Espruino Web IDE as well as on your watch display.

// Print each recognised gesture (and the raw model output) to the console
Bangle.on('aiGesture', (gesture, raw) => print(gesture, raw));

// Also show the recognised gesture on the watch screen for a second
Bangle.on('aiGesture', (gesture) => {
  E.showMessage(gesture);
  setTimeout(() => g.clear(), 1000);
});
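
Once recognition works, you can act on the gestures instead of just displaying them. A minimal sketch, assuming your labels are left and right: it makes the watch vibrate for one gesture and beep for the other:

// React to recognised gestures: vibrate on "left", beep on "right".
// The label names must match those in the .tfnames file.
Bangle.on('aiGesture', (gesture) => {
  if (gesture === "left") {
    Bangle.buzz(200);   // vibrate for 200 ms
  } else if (gesture === "right") {
    Bangle.beep(200);   // beep for 200 ms
  }
});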

Final Comments

First of all, hopefully this short tutorial made you successful in training and recognizing gesture events on your Bangle.js. Hopefully it also inspires you to try to improve the performance, e.g. by collecting more samples, by collecting more event types, or by tweaking the different parameters and settings in Edge Impulse.
