Synergizing Spectrotemporal Dynamics and Filterbank Common Spatial Pattern for Decoding Imagined Speech from EEG

Authors

  • Fatemeh Bagheri, Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
  • Behrad TaghiBeyglou, Institute of Biomedical Engineering, University of Toronto, Toronto, Canada

Keywords

Brain-computer interface, Imagined speech, EEG decoding, Time-frequency analysis

Abstract

In this study, we investigated the feasibility of using spectrotemporal features, namely 5th-order autoregressive (AR) model coefficients, the ℓ2 norm of the 3rd-level discrete wavelet transform (DWT) coefficients, and the total energy of the power spectral density (PSD), in conjunction with the filterbank common spatial pattern (FBCSP) to classify three imagined words (rock, paper, and scissors) from electroencephalogram (EEG) signals. The dataset used in this study was released for the 3rd Iranian BCI competition (iBCIC2020) and was recorded from 15 right-handed healthy adults. Our results indicate that the average accuracy and Cohen's kappa across all participants are 44.37 ± 5.04% and 0.42 ± 0.07, respectively, which surpasses the previously reported results in terms of kappa (average accuracy of 51.2% and kappa of 0.36).
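The abstract names three spectrotemporal descriptors. Below is a minimal sketch of how they could be computed for each EEG channel; it is an illustration, not the authors' implementation, and the sampling rate (FS = 256 Hz), the 'db4' wavelet, and the Welch settings are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt
from scipy.linalg import solve_toeplitz
from scipy.signal import welch

FS = 256  # assumed sampling rate; not stated in the abstract

def ar_coefficients(x, order=5):
    """AR(5) coefficients estimated from the Yule-Walker equations."""
    x = x - x.mean()
    # Biased autocorrelation at lags 0..order.
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order] / len(x)
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])  # 5 coefficients

def dwt_l2_norm(x, wavelet='db4', level=3):
    """l2 norm of the 3rd-level DWT coefficients ('db4' is an assumed wavelet)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.linalg.norm(np.concatenate(coeffs))

def psd_energy(x, fs=FS):
    """Total energy under the Welch power spectral density estimate."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), fs))
    return np.trapz(pxx, f)

def channel_features(trial):
    """Concatenate the three descriptors over all channels of one trial.
    trial has shape (n_channels, n_samples); yields 7 features per channel."""
    feats = []
    for ch in trial:
        feats.extend(ar_coefficients(ch))
        feats.append(dwt_l2_norm(ch))
        feats.append(psd_energy(ch))
    return np.asarray(feats)
```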
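FBCSP is likewise not detailed in the abstract. The sketch below shows a standard one-vs-rest filterbank CSP pipeline (a Butterworth filterbank, per-band binary CSP via a generalized eigendecomposition, and log-variance features); the band edges, filter order, and number of CSP pairs are illustrative assumptions, and the spatial filters would normally be fit on the training trials only.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, sosfiltfilt

FS = 256                                        # assumed sampling rate
BANDS = [(4, 8), (8, 12), (12, 16), (16, 20),
         (20, 24), (24, 28), (28, 32), (32, 36)]  # illustrative filterbank

def bandpass(X, lo, hi, fs=FS):
    """Zero-phase 4th-order Butterworth band-pass along the time axis.
    X has shape (n_trials, n_channels, n_samples)."""
    sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, X, axis=-1)

def csp_filters(Xa, Xb, n_pairs=2):
    """Binary CSP: solve Ca w = lambda (Ca + Cb) w and keep the
    eigenvectors at both ends of the eigenvalue spectrum."""
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    Ca, Cb = mean_cov(Xa), mean_cov(Xb)
    vals, vecs = eigh(Ca, Ca + Cb)              # eigenvalues in ascending order
    keep = np.r_[:n_pairs, -n_pairs:0]          # smallest and largest
    return vecs[:, keep]                        # (n_channels, 2 * n_pairs)

def fbcsp_features(X, y, n_pairs=2):
    """One-vs-rest FBCSP log-variance features for a multi-class task.
    For brevity the filters are fit on all trials; a real pipeline
    would fit them on the training fold only."""
    feats = []
    for lo, hi in BANDS:
        Xf = bandpass(X, lo, hi)
        for cls in np.unique(y):
            W = csp_filters(Xf[y == cls], Xf[y != cls], n_pairs)
            Z = np.einsum('ck,nct->nkt', W, Xf)  # spatially filter each trial
            v = Z.var(axis=-1)
            feats.append(np.log(v / v.sum(axis=1, keepdims=True)))
    return np.concatenate(feats, axis=1)
```

In a full pipeline, these FBCSP features could be concatenated with the spectrotemporal features above and passed to a classifier; the abstract does not state which classifier was used.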

Published

2024-06-26

How to Cite

[1] F. Bagheri and B. TaghiBeyglou, “Synergizing Spectrotemporal Dynamics and Filterbank Common Spatial Pattern for Decoding Imagined Speech from EEG”, CMBES Proc., vol. 46, Jun. 2024.

Section

Abstracts