EloquentArduino library installation
If you get a vl53l5cx not found message, the first thing you should check is the wiring.
Otherwise, if everything is working fine, it’s time to start creating our dataset for model training and testing.
We begin with Project A: Fixed Length Gestures.
The vllcx.printTo(Serial) line prints the sensor data to the Serial port in CSV format, so we can use the edge-impulse-data-forwarder tool to load our data directly into the Edge Impulse Studio.
Load the sketch and start edge-impulse-data-forwarder: now choose a gesture that you can perform in about 0.5 - 1 seconds and start repeating it in front of the sensor.
For optimal accuracy, you should repeat each gesture at least 50 times. It will be even better if different people perform it, so as to capture more intra-gesture variability. (For the sake of this project, though, 30 repetitions of each should suffice)
After you finish collecting data, you can move on to the Impulse design.
Impulse design
A circular buffer is a data structure that we can use to replicate the Edge Impulse windowing function without hassle. Once we fill the buffer, we can feed it as input to the Edge Impulse network and get the predictions back.
FloatCircularBuffer is a data structure that holds an array where you can push new values. When the buffer is full, it shifts the old elements to make room for the new ones. This way, you get an “infinite” buffer that mimics the windowing scheme of Edge Impulse.
By allocating a buffer of EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE items, you are always sure that the impulse model will get the exact number of features it needs to perform inference.
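To make the shifting behavior concrete, here is a minimal sketch of such a buffer in plain C++. The class name ShiftingFloatBuffer and its methods are hypothetical, illustrating the idea rather than the library's actual FloatCircularBuffer implementation:

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical sketch of a shifting buffer: once full, every push
// drops the oldest element to make room for the newest one.
template <size_t N>
class ShiftingFloatBuffer {
public:
    ShiftingFloatBuffer() : count(0) {}

    // Append a value; when the buffer is full, shift old elements left.
    void push(float value) {
        if (count < N) {
            data[count++] = value;
        } else {
            memmove(data, data + 1, (N - 1) * sizeof(float));
            data[N - 1] = value;
        }
    }

    // True once N values have been pushed: the window is ready for inference.
    bool isFull() const { return count == N; }

    const float* values() const { return data; }

private:
    float data[N];
    size_t count;
};
```

In the actual project you would size the template parameter with EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, so a full buffer always matches the feature count the impulse expects.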
That completes the Fixed-Length Gesture Project. You can have a look at a video of this project here:
[[@todo add demo video here]]
The next project follows the same guidelines as this one, but implements a few changes that allow you to perform gesture inference on continuous data, instead of discrete samples.
One main change is the introduction of a voting mechanism to make predictions in sequence more robust.
Once again, run the edge-impulse-data-forwarder tool and collect your own dataset.
In this case, I suggest you collect at least 60 seconds of continuous motion for each gesture to get good results.
AAAA B AAA C AAAA

Our main goal is to eliminate those isolated, spurious predictions (the lone B and C in the stream above).
A naive but effective strategy is to use a running voting scheme: every time a new prediction is made, we check the last few. If the latest agrees with the others, we can be more confident that it is accurate (this only applies in this case of continuous motion!).
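To illustrate the idea, here is a sketch of a unanimous-vote filter over the last K predictions in plain C++. The class name UnanimousVote and its interface are hypothetical, not the library's actual API:

```cpp
#include <cstddef>

// Hypothetical sketch: only report a class when the last K
// predictions all agree; otherwise report "uncertain" (-1).
template <size_t K>
class UnanimousVote {
public:
    UnanimousVote() : count(0), head(0) {}

    // Record a new predicted class index. Returns the agreed class
    // if the last K predictions are unanimous, -1 otherwise.
    int vote(int predicted) {
        history[head] = predicted;
        head = (head + 1) % K;
        if (count < K) count++;
        if (count < K) return -1;  // not enough history yet

        // Require unanimous agreement across the window.
        for (size_t i = 1; i < K; i++)
            if (history[i] != history[0]) return -1;
        return history[0];
    }

private:
    int history[K];
    size_t count;
    size_t head;
};
```

With a window of 3, the stray B and C in a stream like AAAA B AAA C AAAA never win a vote, because they never appear three times in a row.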
The EloquentArduino library has such a voting scheme.