My earlier setup was:

EEG-device ==> Mobile Phone (MindMonitor/PythonOSC) ==> Wi-Fi ==> Computer

and while it worked well enough, I discovered that using the same concept with yet another device in the chain (appending ==> Wi-Fi ==> Mobile Robot) caused more latency. In practice this resulted in undesirable delays between the intended action and the action performed. For example, when trying to turn the robot left, the left turn sometimes happened unacceptably late, and it was difficult to tell whether that was a misinterpretation of the EEG data or something else.
Due to this behavior, and because I wanted to simplify the setup, I explored whether it was possible to remove the phone from the chain, resulting in this setup:

EEG-device ==> Computer ==> Wi-Fi ==> Mobile Robot
The phone, though, not only handled the communication with the computer through MindMonitor and PythonOSC, it also automatically reduced the raw data to spectral bands, so I had to replace both the communication and the spectral functionality. The communication challenge was solved by using the Lab Streaming Layer (LSL) protocol, and the spectral challenge with help from Edge Impulse, whose Python code I used for extracting spectral features. With this I succeeded in removing the phone with almost no extra delay at all!
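To make the new chain concrete, here is a minimal sketch of the computer side: pulling raw EEG samples over LSL and reducing them to band powers. Edge Impulse's spectral-features code isn't reproduced here, so Welch's method from SciPy stands in for it; the stream type, the 256 Hz sample rate, the window length, and the band edges are all assumptions.

```python
# Minimal sketch: pull raw EEG over LSL and reduce it to spectral band powers.
# Assumes a stream of type "EEG" is already running (e.g. via `muselsl stream`).
import numpy as np
from pylsl import StreamInlet, resolve_byprop
from scipy.signal import welch

FS = 256          # Muse sample rate (assumption)
WINDOW = 2 * FS   # 2-second analysis window (assumption)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(samples: np.ndarray) -> dict:
    """Mean power per frequency band, per channel, via Welch's method."""
    freqs, psd = welch(samples, fs=FS, nperseg=FS, axis=0)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean(axis=0)
            for name, (lo, hi) in BANDS.items()}

streams = resolve_byprop("type", "EEG", timeout=10)
inlet = StreamInlet(streams[0])

buffer = []
while True:
    sample, _timestamp = inlet.pull_sample()
    buffer.append(sample)
    if len(buffer) >= WINDOW:
        print(band_powers(np.array(buffer)))
        buffer = buffer[FS:]  # slide the window forward by one second
```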
The hardware used in this project was a Parallax ActivityBot, equipped with an XBee Wi-Fi module and a Parallax Ping))) ultrasonic distance sensor. While more or less any Wi-Fi-equipped robot, mobile or not, can be used, I've found the Parallax product line to be very reliable and easy to work with. The microcontroller on the robot is a Propeller P1 processor with 8 separate cores and shared RAM, which is more than enough for this fairly simple use case.
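The robot-side protocol isn't covered in this write-up, so purely as an illustration of the Computer ==> Wi-Fi ==> Mobile Robot hop, the sketch below sends one-letter drive commands over a plain UDP socket. The IP address, port, and command letters are invented; the XBee Wi-Fi/Propeller firmware would need a matching receiver.

```python
# Hypothetical sketch: send one-letter drive commands to the robot over Wi-Fi.
# Address, port, and the command alphabet are all invented for illustration.
import socket

ROBOT_ADDR = ("192.168.1.50", 8080)  # hypothetical XBee Wi-Fi address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_command(cmd: str) -> None:
    """Send a single command character, e.g. 'L' = left, 'R' = right, 'F' = forward."""
    sock.sendto(cmd.encode("ascii"), ROBOT_ADDR)

send_command("L")  # ask the robot to turn left
```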
Each class of EEG data was recorded for 60 seconds at a time with

muselsl record --duration 60

and the resulting CSV files were renamed to left.<original file name>.csv for the 'shallow' blinks, right.<original file name>.csv for the 'deep' blinks, and background.<original file name>.csv for the background class without blinks.
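The renaming can of course be done by hand, but a small helper keeps the labels consistent. The sketch below assumes muselsl wrote its CSV into the current directory and prefixes the newest file with the class label; Edge Impulse takes the label from the part of the file name before the first dot.

```python
# Helper sketch: prefix the newest muselsl recording with its class label.
# Assumes the recording was saved as a CSV in the current directory.
from pathlib import Path

def label_latest_recording(label: str, directory: str = ".") -> Path:
    """Rename the most recent CSV to <label>.<original file name>."""
    newest = max(Path(directory).glob("*.csv"), key=lambda p: p.stat().st_mtime)
    target = newest.with_name(f"{label}.{newest.name}")
    newest.rename(target)
    return target

# After recording 60 seconds of 'shallow' blinks:
print(label_latest_recording("left"))
```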
In Edge Impulse, click Save parameters and, in the next screen, Generate features.
The model can then be tested with the Model testing functionality in Edge Impulse. For this purpose Edge Impulse by default puts approximately 20% of the data aside. Just click Classify all to start the testing.
In this project the test results were quite good, with an accuracy of 88%, so I decided this was good enough to start testing the model in practice. If this were a project to be deployed to end users, the accuracy would probably need to be much higher.
To deploy the model, go to the Dashboard and download the TensorFlow Lite (float32) model. This model file should be copied to the same directory as the Python program.
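As a minimal sketch of using the downloaded model from Python, the snippet below loads it with TensorFlow's tf.lite.Interpreter and runs one inference. The model file name and the zero-filled feature vector are placeholders; the real input must match the spectral features generated in Edge Impulse.

```python
# Minimal sketch: load the float32 TensorFlow Lite model and run one inference.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.lite")  # placeholder file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

features = np.zeros(inp["shape"], dtype=np.float32)  # placeholder feature vector
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # class probabilities per label
```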