Synergizing Spectrotemporal Dynamics and Filterbank Common Spatial Pattern for Decoding Imagined Speech from EEG
Keywords:
Brain-computer interface, Imagined speech, EEG decoding, Time-frequency analysis

Abstract
In this study, we investigated the feasibility of using spectrotemporal features, namely 5th-order autoregressive model coefficients, the second norm of the 3rd-level discrete wavelet transform coefficients, and the overall energy of the power spectral density, in conjunction with the filterbank common spatial pattern (FBCSP) to classify three imagined words (rock, paper, and scissors) from electroencephalogram (EEG) signals. The dataset used in this research was released in the 3rd Iranian BCI competition (iBCIC2020) and was recorded from 15 right-handed healthy adults. Our results show an average accuracy of 44.37 ± 5.04% and an average Cohen's kappa of 0.42 ± 0.07 across all participants, which is superior in terms of kappa to previously reported results on this dataset (average accuracy of 51.2% and kappa of 0.36).
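To make the three spectrotemporal features concrete, the sketch below shows a minimal, numpy-only illustration of how they could be computed for a single EEG channel epoch. This is not the authors' implementation: the least-squares AR fit, the Haar wavelet (the paper's wavelet family is not stated here), and the 256 Hz sampling rate are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def ar_coeffs(x, order=5):
    # Assumed approach: fit AR(order) by least squares,
    # modeling x[t] ~ sum_k a[k] * x[t-k-1].
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a  # 5 coefficients per channel

def dwt_level3_norm(x):
    # 3-level Haar DWT (wavelet choice is an assumption);
    # returns the 2-norm of the level-3 detail coefficients.
    approx = np.asarray(x, dtype=float)
    detail = approx
    for _ in range(3):
        approx = approx[: (len(approx) // 2) * 2]
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
    return np.linalg.norm(detail)

def psd_energy(x, fs=256.0):
    # Overall energy of a periodogram PSD estimate (fs is hypothetical).
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    return psd.sum()

# Feature vector for one channel: 5 AR coefficients + DWT norm + PSD energy = 7 values.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(512)  # 2 s of synthetic "EEG" at 256 Hz
features = np.concatenate([ar_coeffs(epoch),
                           [dwt_level3_norm(epoch)],
                           [psd_energy(epoch)]])
print(features.shape)  # (7,)
```

In the full pipeline these per-channel features would be computed on the FBCSP-filtered signals and concatenated across channels and filter bands before classification.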