Processing of EEG and IMU sensor time data for intention recognition

sensor fusion machine learning signal processing EEG IMU BCI Python scikit-learn
Picture of someone playing the game with EEG and IMUs.

The Quick Take

Electroencephalography (EEG) is commonly used in non-invasive brain-computer interfaces (BCIs), which allow users to control devices through brain activity. However, BCIs require frequent and long training sessions for effective use - making it difficult to stay focused and to collect good-quality data. I explored the possibility of incorporating Inertial Measurement Units (IMUs), which are sensors that detect motion, into these training sessions to enable the design of more engaging and interactive training methods. In addition, I explored whether fusing IMU and EEG signals could produce a more reliable output signal.

Introduction

Brain-computer interfaces (BCIs) are tools that allow users to interact with a computer through brain activity, rather than by physical movement. This is particularly useful for people with paralysis or other conditions that impair their ability to interact with technology or the world around them. The most widely used recording technique for BCIs is electroencephalography (EEG), thanks to its high temporal resolution, wide availability, and relatively low cost. The EEG signals are typically processed using machine learning or deep learning to translate brain activity into actions - such as moving a cursor.
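As a rough illustration of what such a pipeline can look like, the sketch below extracts band-power features from pre-cut EEG trials and cross-validates a linear classifier with scikit-learn. It is not the exact pipeline used in this project; the placeholder data, channel count, sampling rate, and frequency band are all assumptions.

```python
# Minimal sketch of an EEG classification pipeline (illustrative, not the
# project's exact code). Placeholder data stands in for real, pre-cut trials.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def bandpower_features(epochs, fs, band=(8.0, 30.0)):
    """Log band power per channel in the mu/beta band (8-30 Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.log(np.var(filtered, axis=-1))    # shape: (n_trials, n_channels)

fs = 250                                        # assumed sampling rate in Hz
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 8, 3 * fs))   # placeholder: 60 trials, 8 channels, 3 s
labels = rng.integers(0, 3, size=60)            # placeholder labels: left / right / rest

X = bandpower_features(epochs, fs)
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, labels, cv=5).mean())
```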

Because the mere intention to perform an action is difficult to detect directly, there are various mental tasks a user can perform to control the BCI. One popular approach is to imagine moving your hands (motor imagery): the same brain areas that activate during actual hand movement also activate during imagined movement, and the more vivid the imagery, the stronger the activation.

The goal of this project was to explore whether combining Inertial Measurement Units (IMUs), which provide reliable motion data, with slight hand movements (motor execution) could make it easier to collect good training data and make training sessions more engaging, by using the IMU signals to drive interactive tasks such as simple games.

Execution

I began by conducting an experiment to determine whether left, right, and no movement could be distinguished using a cue-based paradigm without IMUs. This confirmed that the movements could be distinguished and provided a baseline for comparing results with the IMU-enhanced setup.
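For context, in a cue-based paradigm the EEG is typically segmented into fixed windows relative to each cue onset before classification. The sketch below shows one way this might look; the array layout, cue format, and window boundaries are assumptions for illustration, not the exact settings of this experiment.

```python
# Hedged sketch of cue-aligned epoching. Assumes a continuous EEG array `raw`
# of shape (n_channels, n_samples) and a list of (cue_sample, cue_label) pairs
# taken from the experiment log. All names and values are illustrative.
import numpy as np

def epoch_around_cues(raw, cues, fs, tmin=0.5, tmax=3.0):
    """Cut one window per cue, from tmin to tmax seconds after cue onset."""
    start_off, stop_off = int(tmin * fs), int(tmax * fs)
    epochs, labels = [], []
    for cue_sample, cue_label in cues:
        start, stop = cue_sample + start_off, cue_sample + stop_off
        if stop <= raw.shape[1]:             # skip cues too close to the end
            epochs.append(raw[:, start:stop])
            labels.append(cue_label)
    return np.stack(epochs), np.array(labels)

fs = 250
raw = np.zeros((8, 60 * fs))                                     # placeholder recording
cues = [(5 * fs, "left"), (15 * fs, "right"), (25 * fs, "rest")]
epochs, labels = epoch_around_cues(raw, cues, fs)
print(epochs.shape)                                              # (3, 8, 625)
```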

Once I confirmed that it was possible to distinguish between the movements, I developed a small game. In this game, fruit falls in different lanes and the user has to move the avatar to the correct lane to "catch" the fruit. This served as a proof-of-concept for how training sessions with IMUs could be structured, and was also used for further experiments in which the IMU guided the user toward movements that were well-spaced in time and fairly distributed across categories. So instead of labeling the data by the cue, labels were now based on IMU-detected movement.
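The auto-labeling step can be as simple as checking which wrist's IMU registered motion during a trial window. The snippet below is an illustrative sketch under that assumption; the signal names, threshold, and decision rules are placeholders rather than the project's exact logic.

```python
# Sketch of IMU-based auto-labeling (illustrative). Assumes `imu_left` and
# `imu_right` are motion-magnitude traces (e.g., gyroscope norm) recorded in
# sync with the EEG for one trial window. Threshold and rules are placeholders.
import numpy as np

def label_trial(imu_left, imu_right, threshold=0.5):
    """Return 'left', 'right', or 'rest' for one trial window."""
    left_moved = np.max(np.abs(imu_left)) > threshold
    right_moved = np.max(np.abs(imu_right)) > threshold
    if left_moved and not right_moved:
        return "left"
    if right_moved and not left_moved:
        return "right"
    if not left_moved and not right_moved:
        return "rest"
    return "ambiguous"   # both hands moved; drop or review such trials

print(label_trial(np.array([0.1, 0.9, 0.2]), np.array([0.05, 0.1, 0.08])))  # -> left
```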

With some time remaining before the end of the project, I prototyped an inverse-variance weighted fusion algorithm. The idea behind this is that combining data from multiple sensors can produce a more robust signal. This could also be more rewarding for the user as they could see the EEG-based control gradually improve over time. However, I was not successful in improving the pipeline using this method in the remaining time.
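For reference, inverse-variance weighting combines several estimates of the same quantity by weighting each with the reciprocal of its variance, so the less noisy source dominates. The sketch below shows the basic computation; the example numbers are made up, and in practice the variances would have to be estimated from calibration data.

```python
# Minimal sketch of inverse-variance weighted fusion of two estimates of the
# same quantity (e.g., per-class scores from the EEG and IMU pipelines).
# The variances and example values are placeholders, not measured figures.
import numpy as np

def inverse_variance_fusion(estimates, variances):
    """Combine estimates with weights proportional to 1 / variance."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)   # variance of the fused estimate
    return fused, fused_variance

# A noisy EEG estimate (0.4, variance 0.10) and a steadier IMU estimate (0.8, variance 0.02):
fused, var = inverse_variance_fusion([0.4, 0.8], [0.10, 0.02])
print(fused, var)   # the fused value sits much closer to the low-variance IMU estimate
```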

Results

I set up a paradigm where IMUs could be used for auto-labeling EEG data. This also demonstrated how the proof-of-concept game could make training sessions more engaging. Moreover, the IMU-enhanced paradigm showed higher classification accuracy than the cue-based paradigm. At the same time, the noise was also higher when using the IMUs (which was expected, as movement artifacts are a common type of noise in EEG), and it cannot be ruled out that this affected the results. Finally, I briefly explored sensor fusion, as I believe this would be a good addition to the paradigm, but I was unable to get it working within the remaining project time.