Naturalistic Music Decoding from EEG Data via Latent Diffusion Models

arXiv

Abstract

In this article, we explore the potential of using latent diffusion models, a family of powerful generative models, for the task of reconstructing naturalistic music from electroencephalogram (EEG) recordings. Unlike simpler music with limited timbres, such as MIDI-generated tunes or monophonic pieces, the focus here is on intricate music featuring a diverse array of instruments, voices, and effects, rich in harmonics and timbre. This study represents an initial foray into achieving high-quality general music reconstruction from non-invasive EEG data, employing an end-to-end training approach directly on raw data, without the need for manual pre-processing or channel selection. We train our models on the public NMED-T dataset and perform a quantitative evaluation, proposing neural-embedding-based metrics. We additionally perform song classification based on the generated tracks. Our work contributes to ongoing research in neural decoding and brain-computer interfaces, offering insights into the feasibility of using EEG data for complex auditory information reconstruction.
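As a hedged illustration of the neural-embedding-based evaluation mentioned in the abstract, the sketch below scores an EEG-decoded excerpt against its ground-truth counterpart via the cosine similarity of audio embeddings. The choice of CLAP embeddings (Hugging Face checkpoint laion/clap-htsat-unfused) and the function name embedding_similarity are assumptions made for illustration only, not the paper's actual metric implementation.

    import torch
    from transformers import ClapModel, ClapProcessor

    # Assumed embedding network for an embedding-based metric; the paper's
    # exact choice of embedding model is not specified here.
    _MODEL_ID = "laion/clap-htsat-unfused"
    _model = ClapModel.from_pretrained(_MODEL_ID).eval()
    _processor = ClapProcessor.from_pretrained(_MODEL_ID)

    def embedding_similarity(gt_audio, decoded_audio, sampling_rate=48_000):
        """Cosine similarity between CLAP embeddings of a ground-truth excerpt
        and its EEG-decoded reconstruction (both 1-D float waveforms)."""
        inputs = _processor(audios=[gt_audio, decoded_audio],
                            sampling_rate=sampling_rate, return_tensors="pt")
        with torch.no_grad():
            emb = _model.get_audio_features(**inputs)   # shape: (2, dim)
        emb = torch.nn.functional.normalize(emb, dim=-1)
        return float(emb[0] @ emb[1])                   # higher = more similar

A metric of this form can be averaged over all test chunks of a subject to compare decoding models (e.g., the ControlNet and ConvNet variants demonstrated below).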

Decoding examples - ControlNet Subject 2

Track 23 - Test chunk: ground truth (GT) audio / EEG-decoded audio
Track 26 - Test chunk: ground truth (GT) audio / EEG-decoded audio
Track 27 - Test chunk: ground truth (GT) audio / EEG-decoded audio
Track 28 - Test chunk: ground truth (GT) audio / EEG-decoded audio

Decoding examples - ConvNet Subject 2

Track 23 - Test chunk: ground truth (GT) audio / EEG-decoded audio
Track 26 - Test chunk: ground truth (GT) audio / EEG-decoded audio