University of Twente Student Theses


Music-Emotion : towards automated real-time recognition of affective states with a wearable Brain-Computer Interface

Romani, Michele (2022) Music-Emotion : towards automated real-time recognition of affective states with a wearable Brain-Computer Interface.

PDF (13MB)
Abstract: This research investigates the feasibility of performing Emotion-Recognition with Melomind, a wearable neural interface manufactured by myBrainTechnologies. Melomind records EEG signals that can be processed with machine learning algorithms to classify the emotional dimensions of valence and arousal. This study introduces the fields of Brain-Computer Interfaces and Affective Computing, the state of the market, the leading companies producing wearable neural devices for non-clinical applications, and the relevance of studying emotions through music, from the perspectives of both market demand and user experience. The goal of this research was to evaluate Melomind's capabilities for a future real-time Emotion-Recognition application. To this end, the Valence-Arousal model by James Russell was used as the metric for the dimensions of emotion, and several models of emotional correlates in brain activity were evaluated to determine which EEG features would be most suitable for the task. Relevant related work was reviewed to provide a methodological framework for the machine learning task that could be adapted to the constraints imposed by Melomind's limited hardware. An experimental protocol was designed around the inherent advantages of wearable technologies to collect a dataset with continuous labelling of emotions on the Valence-Arousal coordinate system. Possible biases caused by listening conditions, data-labelling tools, emotional interference, multiple cognitive tasks and external factors were taken into account, and the protocol was tested during a pilot week with employees of myBrainTechnologies prior to the real experiment. Data were collected with this robust protocol in two music-listening conditions: eyes-open with a labelling task and eyes-closed without labelling. The data were then processed with a lightweight automated preprocessing pipeline, and two types of features were extracted from the Power Spectral Density of the EEG signal: neuromarkers and band-specific spectral properties calculated in the Theta, Alpha and Beta bands. Feature dimensionality was reduced with Principal Component Analysis, and the classification task was performed with a subject-dependent strategy. The problem was simplified into two separate binary classification tasks, one for valence and one for arousal, and two supervised learning algorithms were tested: Support-Vector Machines and Multi-Layer Perceptron. Hyper-parameters were tuned using GridSearch to select, for each participant, the configuration that yielded the highest Matthews Correlation Coefficient (MCC), a score gaining popularity in machine learning research for its reliability. All models were then trained and evaluated with 5-fold leave-one-block-out cross-validation, producing two cross-validated scores on the training data: CV accuracy and CV MCC. The models were further tested on a completely unseen split of the data, producing two more scores: test accuracy and test MCC. Results were collected, and the two classification methods were compared with each other and with comparable related work. Some models showed promising results, reaching 80% accuracy in arousal classification and 75% accuracy in valence classification with both SVM and MLP.
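For concreteness, the sketch below illustrates the kind of pipeline the abstract describes: Welch band-power features from the Theta, Alpha and Beta bands, PCA reduction, and an SVM tuned by grid search on the MCC score. It is written in Python with scipy and scikit-learn, which are assumptions rather than the thesis's actual toolchain; the synthetic epochs, the two-channel layout, the sampling rate and the plain 5-fold split are illustrative stand-ins for the real Melomind recordings and the leave-one-block-out scheme.

# Minimal sketch of a PSD-feature + PCA + SVM pipeline (illustrative only;
# all data, shapes and parameters below are assumptions, not thesis values).
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, matthews_corrcoef, make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250                                       # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """Mean PSD power per band and channel for each epoch."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1)      # (n_epochs, n_channels * n_bands)

# Synthetic stand-in for Melomind epochs: (n_epochs, n_channels, n_samples).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 2, 4 * FS))
y = rng.integers(0, 2, size=120)               # binary labels, e.g. high/low arousal

X = band_power_features(epochs)
pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=0.95)),   # keep 95% of variance (assumption)
                 ("clf", SVC())])
grid = GridSearchCV(pipe,
                    {"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]},
                    scoring=make_scorer(matthews_corrcoef), cv=5)

# Hold out an unseen split, then tune on the training data (subject-dependent).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
grid.fit(X_tr, y_tr)
print("CV MCC:", grid.best_score_)
print("test MCC:", matthews_corrcoef(y_te, grid.predict(X_te)))
print("test accuracy:", accuracy_score(y_te, grid.predict(X_te)))

The same grid-search scaffold would apply to the MLP case by swapping the final estimator for a neural classifier and adjusting the parameter grid.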
MCC scores confirmed an average positive learning capability of the models, although many models ended up overfitting or underfitting. The average classification results did not meet the initial expectations and fall below those of many related studies, suggesting that the lightweight pre-processing, the limited hardware of the Melomind, or a combination of both hinder the classification task and are not yet suitable for real-time Emotion-Recognition. The final discussion covers the current challenges of real-time Emotion-Recognition reported by this and related studies, and considers possible improvements to emotional self-reporting, feature selection and the artifact-cleaning process, as well as the requirements for moving from subject-dependent to subject-independent classification. In the conclusion, some considerations are drawn from answering the research questions, and an improved artifact-cleaning approach is recommended for a follow-up study on the same dataset, which could give further insight into the development of a wearable affective Brain-Computer Interface based on Melomind.
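As a pointer in the direction of the recommended improved artifact cleaning, the following minimal sketch drops epochs whose peak-to-peak amplitude exceeds a threshold. This is a generic amplitude-based rejection, not the method used or proposed in the thesis; the threshold, injected spikes and data shapes are hypothetical.

# Simple amplitude-based artifact rejection (hypothetical sketch).
import numpy as np

def reject_artifacts(epochs, threshold_uv=100.0):
    """Keep epochs whose peak-to-peak amplitude stays below
    threshold_uv (microvolts) on every channel."""
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # (n_epochs, n_channels)
    keep = (ptp < threshold_uv).all(axis=1)
    return epochs[keep], keep

# Example on synthetic two-channel data at roughly EEG-scale amplitudes.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 2, 500)) * 10.0   # in microvolts
epochs[:5, 0, 250] += 300.0                          # inject spikes in 5 epochs
clean, keep = reject_artifacts(epochs)
print(f"kept {keep.sum()} of {len(keep)} epochs")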
Item Type: Essay (Master)
Faculty: EEMCS: Electrical Engineering, Mathematics and Computer Science
Programme: Interaction Technology MSc (60030)
Link to this item: https://purl.utwente.nl/essays/89647