Improved Phoneme-Based Myoelectric Speech Recognition
Abstract
This paper introduces an adaptive phoneme-based multi-expert speech recognition system using the myoelectric signal (MES). The MES produced by the speaker’s facial muscles can serve as an additional expert to enhance recognition accuracy in noisy environments. In previous work, ten words were recognized by a phoneme-based classifier. In the current study, an expanded set of words is classified phonemically by an HMM classifier trained at the phoneme level on a subset of the words. The raw MES signals are rotated by class-specific rotation matrices to spatially decorrelate the measured data prior to feature extraction. In a post-processing stage, uncorrelated linear discriminant analysis (ULDA) is used for dimensionality reduction. The resulting data are classified by an HMM classifier to obtain phonemic log likelihoods, which are mapped to the corresponding words by an artificial neural network. These methods achieve a recognition accuracy of 89% when classifying an expanded lexicon containing the same phonemes as those used in the training set. As a result, new words are recognized from their phoneme structure without retraining the HMM classifier.
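The sketch below is not the authors' code; it is a minimal illustration of the pipeline described above, assuming a 5-channel MES recording, RMS frame features, a random orthonormal matrix as a stand-in for the ULDA projection, and summed diagonal-Gaussian frame log likelihoods as a stand-in for the per-phoneme HMM scores. All names, shapes, and parameters are hypothetical.

```python
# Hypothetical sketch of the described pipeline: class-specific spatial
# decorrelation of raw MES, frame feature extraction, a placeholder
# dimensionality reduction, and placeholder per-phoneme scores whose
# vector would be mapped to a word label by an ANN.
import numpy as np

def class_rotation_matrix(signals):
    """Eigenvectors of the channel covariance for one class; rotating
    by this matrix spatially decorrelates the measured channels."""
    cov = np.cov(signals, rowvar=False)          # (channels, channels)
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs                               # orthogonal rotation

def decorrelate(signals, rotation):
    """Rotate raw multichannel MES into decorrelated coordinates."""
    return signals @ rotation

def frame_features(signals, frame_len=128):
    """Per-frame RMS per channel (placeholder feature set)."""
    n_frames = signals.shape[0] // frame_len
    frames = signals[: n_frames * frame_len].reshape(n_frames, frame_len, -1)
    return np.sqrt((frames ** 2).mean(axis=1))   # (frames, channels)

def gaussian_loglik(X, mean, var):
    """Summed diagonal-Gaussian frame log likelihood, standing in for
    the log likelihood produced by one phoneme-level HMM."""
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (X - mean) ** 2 / var)))

# --- toy end-to-end flow on synthetic data (assumed shapes) ---------------
rng = np.random.default_rng(0)
raw = rng.standard_normal((4096, 5))             # one 5-channel MES segment
R = class_rotation_matrix(raw)                   # class-specific rotation
feats = frame_features(decorrelate(raw, R))      # decorrelate, then features

# Placeholder for the ULDA projection; a real implementation would derive
# the projection from between-/within-class scatter of labelled features.
W, _ = np.linalg.qr(rng.standard_normal((feats.shape[1], 3)))
reduced = feats @ W                              # reduced-dimension features

# One placeholder score per "phoneme" model; the resulting log-likelihood
# vector is what an ANN would map to the corresponding word.
phoneme_models = [(rng.normal(size=3), np.ones(3)) for _ in range(10)]
scores = np.array([gaussian_loglik(reduced, m, v) for m, v in phoneme_models])
print(reduced.shape, scores.shape)               # (32, 3) (10,)
```

Because the word-level classifier only consumes the phonemic log-likelihood vector, a new word composed of already-trained phonemes can be added by extending that final mapping alone, which is the property the abstract exploits to avoid retraining the HMMs.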