Applying EEG Data to Machine Learning, Part 1

Use a consumer-grade EEG device to control and manipulate a simple video game by using TinyML.


Created By: Thomas Vikstrom

Public Project Link: https://studio.edgeimpulse.com/public/136580/latest

Project Demo

GitHub Repo

https://github.com/baljo/Muse-EEG

Objective

The objective of this tutorial is to show how you can use a Muse EEG-device to control a simple Pong game just by blinking your eyes.

Introduction

As a gentle introduction to the concept of human and machine communication, the first part shows how to use Muse's built-in blink detection functionality and the second part shows how you can use Machine Learning (ML) with Edge Impulse (EI) to be able to expand the game. And while this is just playing a simple game by blinking, there is a lot happening in the area of connecting brain and machine. Research in BCI-technology (Brain Computer Interface) has enabled tasks earlier believed impossible to become reality. One example of this is when patients suffering from ALS had a brain implant inserted into a blood vessel in their brains, and after some training were able to communicate e.g. through WhatsApp messages.

Once you understand the benefits and limitations of using EEG-data from a consumer-based device, you can step up and try to control external devices like robots using eye blinks, or perhaps even your thoughts!


What is EEG?

Electroencephalography, or EEG, is a method to record the electrical activity generated by the brain via electrodes placed on the scalp surface. More detailed information is found e.g. from (Farnsworth, 2021) or from Wikipedia.

Professional or clinical EEG-devices are typically equipped with between 16 to 64 high-quality electrodes, and the cost is ~800 - 3000+ USD/EUR. They are mostly intended for research and clinical usage. Consumer-based devices on the other hand have only a few electrodes, but are much more affordable and are often also easier to use. Their main focus is on meditation and relaxation, but they can with some limitations also be used for research (more about this in my Master's thesis).

Muse EEG-devices are aimed at consumers and have four EEG-electrodes: two on the forehead and two behind the ears. In addition, they have an accelerometer/gyroscope, and newer models include a PPG-sensor which measures blood flow, breathing rhythm, and heart rate. In this tutorial, however, only the signals from the EEG-electrodes are used.


Prerequisites

To be able to reproduce examples found in this tutorial, you'll need:

  • A Muse EEG-device, any model should work, although Muse-01 (from 2014) streams with 220 Hz instead of 256 Hz and might require a few code changes if you are collecting raw data. They cost around 250 USD/EUR and are manufactured by the Canadian company Interaxon Inc.

  • Mind Monitor app for iPhone or Android, one-time cost is ~15 USD/EUR

  • iPhone or Android phone

  • A computer with WiFi, able to run Python

    • only PC/Win10 has been tested, although Mac and Linux computers are expected to work

  • Python 3.x

The code repository (MIT-license) might later include other EEG- and ML-related programs as well.


Data flow

The data flow for both Part 1 and Part 2 is: Your brain → Muse → Bluetooth → Phone/Mind Monitor → WiFi → Computer


Preparations

Python modules

  • Install Python-OSC and Tkinter from a command prompt with pip install python-osc and pip install tk

  • For Part 2 you'll also need TensorFlow; install it with pip install tensorflow

Mind Monitor settings

  • OSC Stream Target IP: here you should add your computer's local IP-address. On Win10 you can run ipconfig in a command prompt to find it; it often starts with 192.168.x.x

  • OSC Stream Port: set to 5000

Computer

  • You might need to configure the computer's firewall to allow traffic through port 5000.
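
Before moving on, you can sanity-check that the Mind Monitor stream actually reaches your computer with a few lines of python-osc. The sketch below is illustrative only and is not part of the project's own programs: the address /muse/eeg is the raw-EEG stream Mind Monitor can send, and anything else it streams is caught by the default handler so you can see what arrives.

    # listen_osc.py - minimal check that Mind Monitor data arrives on port 5000
    from pythonosc import dispatcher, osc_server

    def eeg_handler(address, *values):
        # one message per raw-EEG sample; values are the electrode readings
        print(address, values)

    def default_handler(address, *values):
        # prints any other OSC address Mind Monitor streams (band powers, blink, etc.)
        print(address, values)

    disp = dispatcher.Dispatcher()
    disp.map("/muse/eeg", eeg_handler)          # raw EEG, if enabled in Mind Monitor
    disp.set_default_handler(default_handler)

    # 0.0.0.0 = listen on all interfaces; 5000 = the OSC Stream Port set in Mind Monitor
    server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), disp)
    print("Listening for OSC on port 5000 ...")
    server.serve_forever()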


Part 1 - play Pong by blinking, no ML involved

In this first part you will learn how to control a Pong game just by blinking your eyes. A short video of the Pong game is available here.

How does it work?

As mentioned earlier, this version is not using machine learning at all. Instead it relies on built-in functionality in the Muse EEG-devices that can detect eye blinks and jaw clenches. These events produce distinct EEG-signals which can also be seen in the Mind Monitor graphs. The Pong game simply scans for these events and reacts to them.
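
To make this concrete, the sketch below shows one way such an event-driven loop could look with python-osc. It is not the author's actual game code: the addresses /muse/elements/blink and /muse/elements/jaw_clench follow Mind Monitor's OSC naming (verify them against what your app streams), and the paddle logic simply cycles through move/stop states the way the game play instructions describe.

    # Sketch: react to Muse's built-in blink / jaw-clench events (no ML involved)
    from pythonosc import dispatcher, osc_server

    # Each blink advances the paddle through: stop -> one way -> stop -> other way -> ...
    states = [0, 1, 0, -1]
    state_index = 0
    paddle_direction = states[state_index]

    def blink_handler(address, *values):
        global state_index, paddle_direction
        state_index = (state_index + 1) % len(states)
        paddle_direction = states[state_index]
        print("Blink -> paddle direction:", paddle_direction)

    def jaw_handler(address, *values):
        # Detected in Part 1 but not mapped to any game action
        print("Jaw clench detected")

    disp = dispatcher.Dispatcher()
    disp.map("/muse/elements/blink", blink_handler)
    disp.map("/muse/elements/jaw_clench", jaw_handler)

    # In the real game this server would run alongside the Pong loop,
    # which reads paddle_direction every frame.
    server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), disp)
    server.serve_forever()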

Installation

  • Download the Python code

  • Run the game from your favourite IDE or from the command prompt with python "Blink Pong without ML.py"

Game play instructions

See Game play instructions, common for both Part 1 and Part 2, below.


Part 2 - play Pong by blinking, using ML

How does it work?

In short, you will need to collect EEG-data (blinks and non-blinks) from your Muse device and train an ML-model in Edge Impulse. The trained model will then be used in the Pong game, which otherwise functions as in Part 1.

While this is not as complex as brain surgery (😏), it is still a bit more involved than Part 1. The main steps are listed here and will later on be explained in detail.

Process flow

  1. Collect EEG-data for the events you want to classify, one event being e.g. eye blinks and another background brain "noise"

  2. Create a project in Edge Impulse and upload the EEG-data to it

  3. Create, train, and test a ML-model in EI

  4. Download the trained TensorFlow ML-model to your computer

  5. Plug the model into your game and test it

  6. Rinse and repeat from 1 as you'll probably need more data.

Installation

  • Download the following Python programs into a folder you remember

    • Collect OSC-data.py, which you will use for collecting data

    • Blink Pong with ML.py, which is the game itself

Detailed instructions

In this chapter you will get detailed instructions, from start to finish, on how to collect data, train a model, and test it in practice. While not every click and detail is listed, you should, with normal computer proficiency, be able to follow and replicate the steps.

0. Connect Muse and start streaming

  • Connect the Muse EEG-device to your phone

  • Wait until the horseshoe in Mind Monitor has disappeared and the graph lines for all sensors have calmed down, as in the picture. You might need to wait a few minutes to acquire good signals, but it's possible to speed up the process a bit by moistening the sensors with e.g. a wet finger.

  • Start streaming from Mind Monitor by clicking on the button shown in the picture

1. Collect EEG-data

In this step you will, using a Python-program, collect data from your Muse EEG-device.

  • Run Collect OSC-data.py from your favourite IDE or from the command prompt with python "Collect OSC-data.py"

    • Default events being recorded are 1 and Noise, both for 2 seconds. If you want, you can later change them or add more events in the code; look for this:

      rec_dict = {
          "1": 2,       # label "1" = a single eye blink, recorded for 2 seconds
          "Noise": 2    # label "Noise" = relaxed background "noise", recorded for 2 seconds
      }
  • To start recording events, click on #1 in Mind Monitor (see picture above).

  • In the terminal window you will see the number 1 for 2 seconds. During this time you should blink once.

  • Next time you'll see Noise for 2 seconds. During this time you should not blink, just relax.

  • The program will record each event in a separate CSV-file. So if you've blinked 100 times and created brain noise 100 times, you'll end up with 200 files of 2 seconds each.

  • It is not necessarily easy to concentrate for a long time, so you are recommended to take a break every now and then. Based on experience, it is also good to remove the EEG-device when not recording and, if you have a longer break, to turn it off to save battery. Additionally, the next time you use your device it will inevitably sit in a slightly different place on your head, and as a result you will probably get a more robust ML-model when recording data across several sessions.
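
For reference, saving each event as its own CSV file can be as simple as the sketch below. The file-name pattern and column layout here are made up for illustration; Collect OSC-data.py has its own format, so treat this only as a picture of what happens under the hood.

    import csv
    import time

    def save_event(label, rows):
        # rows: the EEG samples received during one 2-second recording window
        filename = f"{label}_{int(time.time())}.csv"
        with open(filename, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "TP9", "AF7", "AF8", "TP10"])  # example header
            writer.writerows(rows)
        return filename

    # 100 blink recordings + 100 noise recordings -> 200 small CSV files of 2 seconds each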

2. Create a project and upload EEG-data to Edge Impulse

Head over to Edge Impulse. If you are new to EI, you are recommended to take a look at their great Getting started instructions.

    • Create a project

    • Select Data acquisition and click the icon labeled Upload existing data

    • Use default settings

    • Upload the files you've recorded in the previous step.

3. Create a model, train, and test it

In this section you will first create a ML-model, then train it, and finally test it. This can be most rewarding in case you get excellent performance the first time, but if not, then you need to find out how to improve the model. Hint: ML eats data for breakfast, lunch, and dinner 😃

Create a model

  • Click Create an impulse and fill in the Time series data as shown in the picture. While the length of the samples is in fact 2000 ms (= 2 seconds), I've found that using 20 ms (i.e., 20 lines for each sample) works at least as well.

  • Add the processing block Raw data and leave all axes checked. You can later try to find which axes have little or no impact on your model and uncheck them, but then you also need to modify the line expected_samples = 20 in Blink Pong with ML.py accordingly. This is explained in more detail in the code itself.

  • Add the learning block Classification (Keras). In this tutorial you will have only 2 output features: 1 and Noise, but if you want to create an event for double blinks, feel free to record events labeled e.g. 2 as well, as in the picture.

  • Click Save impulse, then click Raw data in the left-hand menu

    • You will see a graph of one of the samples as well as the raw features.

  • In this case you don't need to change anything, so click Save parameters which will take you to the second tab.

  • Click Generate features

    • This processes the samples

    • After a while you will see a graph in the Feature explorer. This gives you a view of how well your data can be clustered into different groups. In an ideal situation, all similar samples would be clustered into the same group with a clear distinction between groups. If that's not the case, no worries at this point; the neural network algorithm will in many cases still be able to do a very good job!

Train the neural network

Here you will train the neural network and analyse its performance.

  • Click NN Classifier from the left hand menu

  • Change the Number of training cycles to 200. This is another parameter to tweak: the higher the number, the longer the training will take, but also the better the network will perform, at least until it can't improve any more.

  • Click on Start training

    • Within a few minutes, depending on the number of labels and data quantity you have, the training will finish.

  • The graph shows the training performance and accuracy. While 100 % looks like a perfect score, it isn't necessarily so. The reason is that the network might perform poorly in real situations when confronted with sample data it has not seen before.

Test the model in Edge Impulse

In this step you will see how well the model performs with data it has not seen before. For this purpose, Edge Impulse set aside approximately 20 % of the data as a test set when you uploaded it.

  • Click on Model testing in the menu

  • Click on Classify all

    • This will run the test samples through the trained model

  • After just a short while, depending on the amount of test samples and model complexity, you will get a performance report. Unless you have lots of data or a perfect model, the performance is seldom 100 %. Depending on your use case and what performance you require, you might need to go back a few steps by collecting more and different data, or by tweaking the parameters, to reach your minimum expectations.

4. Download the trained TensorFlow ML-model to your computer

Here you will download the trained model to your computer.

  • Click Dashboard from the left hand menu

  • Scroll down to the section Download block output and click on the icon next to NN Classifier model TensorFlow Lite (float32)

    • The file will get the name ei-[ your project name ]-nn-classifier-tensorflow-lite-float32-model.lite. Although you can rename it if you really want to, why not save your brain cells for more important stuff 😏

  • Save the file to a folder of your choice (and which you remember as you'll need it soon!)

5. Plug the model into your game and test it

Here you will deploy your model and test it out in the wild real world!

Deploy your model

  • Copy or move the file from the previous step to the folder where you put your Python programs.

  • Open Blink Pong with ML.py with your favourite IDE or a text file editor like Notepad

    • Scroll to the first function initiate_tf and locate the line with lite_file = "ei-.......lite"

    • Replace the file name after = with your own file name, remember the double quotes " " and .lite at the end
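
If you are curious what the game then does with the file, the sketch below shows the essence of loading an Edge Impulse float32 .lite model with TensorFlow Lite and classifying one window of data. The file name and the dummy input window are placeholders; the real game builds the window from the live OSC stream.

    import numpy as np
    import tensorflow as tf

    # Placeholder name - use the .lite file you downloaded from your own project
    lite_file = "ei-your-project-nn-classifier-tensorflow-lite-float32-model.lite"

    interpreter = tf.lite.Interpreter(model_path=lite_file)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]
    print("Model expects input shape:", input_details["shape"])

    # Dummy window with the right shape; in the game this is 20 rows of EEG data,
    # flattened in the same axis order as during training
    window = np.zeros(input_details["shape"], dtype=np.float32)

    interpreter.set_tensor(input_details["index"], window)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])
    print("Class scores (e.g. [1, Noise]):", scores)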

Test!

This can be the most rewarding - or most frustrating - phase in the whole process as you'll find out how well you can control the game with your blinks.

  • Run the game from your favourite IDE or from the command prompt with python "Blink Pong with ML.py"

  • Please note the following:

    • The model explained in this tutorial is based on 2 second long samples. This also means that the Pong game will collect EEG-data for 2 seconds before trying to classify it. When playing the Pong game, the ball might have travelled too far before your blinks have moved the paddle to the desired place.

      • The function pong towards the end of the program includes a variable ball_speed = 5 where you can change the ball speed.

      • By unchecking some of the axes in the Create an impulse step, you will also reduce the amount of data that needs processing and the time it takes. As mentioned earlier, and as also explained in the code itself, you then need to change the variable expected_samples from 20 to something else. If you e.g. reduce the axes from 20 to 10, you would put 10 in this variable.

To play, see Game play instructions, common for both Part 1 and Part 2, below.

In the Github repo you'll find a .lite-file trained by the author. You can try the game using this, without the need to record your own EEG-data, but don't be surprised if it doesn't give good results, brains tend to be different...


Game play instructions, common for both Part 1 and Part 2

  • Connect the Muse EEG-device to your phone

  • Start streaming from Mind Monitor by clicking on the button shown in the picture

  • The objective of the game is to prevent the ball from hitting the floor by moving the paddle.

  • You control the paddle by blinking

    • Blink once to move the paddle in one direction

    • Next blink will stop the paddle

    • Next blink will move the paddle in the other direction

  • The score increases when you manage to hit the ball and decreases when you fail.

  • An intermittent message is shown whenever you blink or clench your jaw. Note that jaw clenches are only available in Part 1, and they are not linked to any action in the game; that is left up to you to implement!


Final Comments

That's it! Hopefully you were successful in training and recognising eye blinks with your EEG-device, and hopefully this also inspires you to improve the performance further, e.g. by collecting more samples, collecting more event types, or tweaking the different parameters and settings in Edge Impulse. And finally, once you have understood the possibilities and limitations of a consumer-based EEG-device, why not challenge yourself with something more advanced than the Pong game!

All images are either the author's own or from Wikimedia Commons
