Prototyping a smart glove for gesture recognition, using Velostat to make DIY flex sensors for HCI.
Created By: Simone Salerno
Demo Video: View on GitHub
HCI (Human-Computer Interaction) is a fast-evolving field that is finding its way into the lives of many people, with large application potential in the consumer, health, and assistive technology industries.
Free-form interfaces (for example, voice and gestures) have become ubiquitous and the cost of adopting them keeps getting lower. One type of interface that may come in handy in the bleeding-edge field of AR/VR, or in the health industry for people with limited mobility, is one that uses finger and hand movement to interact with a device. Essentially: a smart glove.
A smart glove is one that reacts to the movements of the fingers, recognizing either their static positions or their movement patterns. In this project I'm going to build a flex smart glove - one that uses flex sensors on the fingers and an Edge Impulse TinyML model - with a BOM (Bill of Materials) of less than $5 USD (excluding the microcontroller).
Flex sensors change their resistance based on the amount of "bend" they're subjected to. Commercial flex sensors for the Arduino and embedded ecosystem exist, but they cost about $15 USD each. Covering all five fingers of a single hand would thus cost 5 x $15, or $75 USD. That may not be prohibitive, but doing it yourself cuts the cost down to just about one dollar USD.
Moreover, you can build flex sensors of the size you wish to use in other settings too (you could attach them to your arms or legs, for example).
Let's see how to build one!
There are a few materials that can be used to build a flex sensor yourself. For this project I selected Velostat because it is widely available and pretty cheap: an 11"x11" sheet costs $5 USD on Adafruit. It is a pressure-sensitive material that also reacts well to bending.
Follow this YouTube video for a step by step tutorial: https://www.youtube.com/watch?v=FEPgLbPv6NM
You will then sew the sensors to the glove to keep them in place.
Since each finger has its own flex sensor, we'll be dealing with 5 axes of data. Considering that the user can perform both static positions and moving gestures, we need to collect data over time. If we collect N timesteps, our feature vector will be of size 5 x N.
Our DIY flex sensors, though, are not very precise. Relying on their absolute values alone could lead to bad results because they can vary based on the stretch force applied at rest state (e.g., how we wear the glove). This is why we are augmenting the input features by adding the cross-difference among each pair of fingers. The rationale behind this is to capture the relative position/movement of one finger from the others.
So, calling F1...F5 each finger's reading, a single row in the feature vector will be made of:

F1, F2, F3, F4, F5, F1-F2, F1-F3, F1-F4, F1-F5, F2-F3, F2-F4, F2-F5, F3-F4, F3-F5, F4-F5

that is, the 5 raw readings plus the 10 pairwise cross-differences (size 15).

The final input vector will be a matrix of size 15 x N.
The value of N depends on 2 factors:
Gesture duration: how long (in seconds) is the longest gesture?
Sampling frequency: how fast are you reading the flex sensors?
The faster you sample, the more granularity your data will have, but the larger the input vector (and the Edge Impulse model) will be! A good starting point is taking 100 samples/second. You can increase this value later if the model is not performing well.
How long should the longest gesture be? I recommend starting with short gestures of ~1 second. Once you have good results, you can try increasing the gesture complexity and duration. Also, start with repeating patterns (a.k.a. continuous motion), not "one-shot" gestures (they're harder to recognize). As a concrete example, at 100 samples/second a 1-second gesture gives N = 100, so each input is a 15 x 100 matrix (1500 values).
To handle the sensor readings, we're going to create a Hand class: this will encapsulate the actual reading and the cross-difference calculations. We're also adding exponential smoothing to eliminate high-frequency fluctuations.
Create a file (I've called it hand.h) with the following:
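The original hand.h isn't reproduced here, so below is a minimal sketch of what it could look like, assuming the five flex sensors are wired as voltage dividers to analog pins A0-A4. The pin assignments, the smoothing factor, and the read() method name are illustrative assumptions, not the author's exact implementation.

```cpp
// hand.h - a minimal sketch of the Hand class
// (pin assignments, smoothing factor and method names are assumptions)
#pragma once
#include <Arduino.h>

class Hand {
public:
    // analog pins the five flex sensors are assumed to be wired to
    Hand(uint8_t thumb = A0, uint8_t index = A1, uint8_t middle = A2,
         uint8_t ring = A3, uint8_t pinky = A4, float alpha = 0.7f)
        : _alpha(alpha) {
        _pins[0] = thumb; _pins[1] = index; _pins[2] = middle;
        _pins[3] = ring;  _pins[4] = pinky;
        for (uint8_t i = 0; i < 5; i++) _smoothed[i] = 0;
    }

    // read the 5 sensors, apply exponential smoothing and fill `features`
    // with the 5 raw values + 10 cross-differences (15 values in total)
    void read(float features[15]) {
        for (uint8_t i = 0; i < 5; i++) {
            float raw = analogRead(_pins[i]);
            _smoothed[i] = _alpha * _smoothed[i] + (1 - _alpha) * raw;
            features[i] = _smoothed[i];
        }

        uint8_t k = 5;
        for (uint8_t i = 0; i < 5; i++)
            for (uint8_t j = i + 1; j < 5; j++)
                features[k++] = _smoothed[i] - _smoothed[j];
    }

private:
    uint8_t _pins[5];
    float _smoothed[5];
    float _alpha;
};
```

Each call to read() produces one 15-element row matching the feature layout described above; the exponential smoothing keeps a running average so high-frequency jitter from the DIY sensors is damped.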
The collection sketch is very simple: we only capture data and print it to the Serial Monitor of the Arduino IDE in CSV format. This allows use of the edge-impulse-data-forwarder command line tool to import the data into the Edge Impulse platform.
Here is the Sketch I've created:
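The author's exact Sketch isn't shown here; a minimal data-collection version along the same lines, assuming the hand.h sketched above and 100 samples/second, could be:

```cpp
// data_collection.ino - minimal sketch for use with edge-impulse-data-forwarder
// (baud rate and sampling period are assumptions)
#include "hand.h"

Hand hand;

void setup() {
    Serial.begin(115200);
}

void loop() {
    float features[15];
    hand.read(features);

    // one CSV row per sample: 5 raw readings + 10 cross-differences
    for (uint8_t i = 0; i < 15; i++) {
        Serial.print(features[i]);
        Serial.print(i < 14 ? ',' : '\n');
    }

    delay(10);  // ~100 samples/second
}
```

Each printed line becomes one 15-axis sample when imported into Edge Impulse Studio.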
Upload the Sketch, launch the edge-impulse-data-forwarder tool, and perform each of the positions/gestures you want to recognize.
Our data is a multi-axis time series. This usually means the processing block will be made of Spectral Features (FFT frequencies and their power spectrum).
For the Classifier, I opted for a fully connected network with 2 layers. As you can see from the confusion matrix, you can expect to achieve a very high score on all the gestures.
Now move to the Deployment tab and download the project as an Arduino library. To run the model on our board, we're going to leverage the EloquentArduino library, which makes this a breeze.
After you have imported the downloaded zip as a library in the Arduino IDE, compile and upload the following Sketch:
The Sketch leverages a couple of helper classes from the EloquentArduino library:
ImpulseBuffer: allows you to push values to a circular queue that discards older values when new ones arrive.
Quorum: allows you to batch the latest N predictions and check that at least N/2 + 1 of them agree on the same label. This helps smooth out prediction noise (a rough standalone equivalent of both helpers is sketched below).
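I won't reproduce the EloquentArduino helper APIs here; instead, the following is a rough standalone sketch of the same idea: keep a rolling window of readings, run the Edge Impulse classifier over it, and only report a label when a majority of the recent predictions agree. The inferencing header name (smart_glove_inferencing.h), the window handling, and the quorum size are assumptions; the actual Sketch uses the ImpulseBuffer and Quorum helpers instead of the hand-rolled logic shown here.

```cpp
// inference.ino - a rough standalone sketch (header name, window size and
// sampling period are assumptions; the original uses EloquentArduino helpers)
#include <smart_glove_inferencing.h>  // placeholder: your project's inferencing header
#include "hand.h"

// rolling window of features: EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE should equal 15 * N
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// majority vote over the last few predictions (stand-in for the Quorum helper)
const uint8_t QUORUM_SIZE = 5;
uint8_t votes[QUORUM_SIZE];
uint8_t voteIndex = 0;

Hand hand;

void setup() {
    Serial.begin(115200);
}

void loop() {
    // shift the window left by one sample and append the newest reading
    // (stand-in for the ImpulseBuffer circular queue)
    memmove(features, features + 15,
            (EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE - 15) * sizeof(float));
    hand.read(features + EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE - 15);

    // wrap the buffer into a signal and run the Edge Impulse classifier
    signal_t signal;
    numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

    ei_impulse_result_t result = { 0 };
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK)
        return;

    // pick the most confident label for this window
    uint8_t best = 0;
    for (uint8_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++)
        if (result.classification[i].value > result.classification[best].value)
            best = i;

    // record the vote and report only when a majority of recent windows agree
    votes[voteIndex] = best;
    voteIndex = (voteIndex + 1) % QUORUM_SIZE;

    uint8_t agree = 0;
    for (uint8_t i = 0; i < QUORUM_SIZE; i++)
        if (votes[i] == best) agree++;

    if (agree > QUORUM_SIZE / 2)
        Serial.println(result.classification[best].label);

    delay(10);  // ~100 samples/second, matching the collection sketch
}
```

If the printed labels lag behind your movements, lowering the quorum size trades some noise rejection for responsiveness.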
Open the Serial Monitor and start moving your hand, replicating the gestures you recorded.
Using this method, we successfully built a hand gesture recognition machine learning model out of Velostat pressure-sensitive conductive sheets. Dedicated, use-specific flex sensors do exist, but they are more costly. The DIY flex sensors capture each finger's position relative to the others, and the model classifies the gesture accordingly. This creates the opportunity to recognize gestures and use them as an interface to devices.