Development of a laryngeal surface electromyographic system for an efficient neurally-controlled communication interface
Brain-computer interfaces (BCIs) have been applied with increasing frequency and success in recent years for the purposes of neural rehabilitation, augmentation and automated physical therapy. Although there have been significant achievements in the repair, replacement and enhancement of motor control and neurological prostheses, comparable advances in communication prostheses have been limited. Current BCI systems for communication have yet to attain reliable, indefatigable control by their users with information transfer rates comparable or superior to rudimentary binary switches. We hypothesize that non-invasive surface electromyographic (sEMG) recordings of the fine motor movement of the larynx and articulatory muscles could contribute to a robust BCI communication and biofeedback system for multi-purpose applications. This approach directly accesses areas corresponding to phonetically-driven language output, potentially resulting in a more efficient and natural synthesis of speech in the presence of an intact recurrent laryngeal nerve and its corresponding motor neurons. Preliminary studies by various research groups demonstrate that even mimed or low-energy “sub-vocal” movement can support sEMG-based language transcription. By bootstrapping microphone-recorded speech with the filtered sEMG signals of phonemes and sentences dictated by healthy General American English-accented native speakers and applying classification algorithms to the sEMG recordings, we have derived a speaker-independent encyclopedia of laryngeal movement corresponding to the International Phonetic Alphabet (IPA). Preliminary testing of the system shows robust (>90%) accuracy in classifying speaker-independent sEMG phonetic data. Current work comprises real-time classification and phonetic transcription of sEMG output and the presentation of biofeedback to naive subjects.
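The abstract does not specify the classification algorithm used. As a purely illustrative sketch of the general pipeline (feature extraction from sEMG windows followed by phoneme classification), the toy example below uses hypothetical features (RMS amplitude and zero-crossing rate) and a nearest-centroid classifier on synthetic signals; the actual system's features, classifier, and phoneme set may differ.

```python
import numpy as np

def emg_features(window):
    """Two simple illustrative sEMG features for one analysis window:
    RMS amplitude and zero-crossing rate (fraction of sign changes)."""
    rms = np.sqrt(np.mean(window ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(window))) > 0)
    return np.array([rms, zcr])

class NearestCentroidPhonemeClassifier:
    """Assigns a window to the phoneme whose mean feature vector is closest
    (a stand-in for the unspecified classification algorithm in the paper)."""
    def fit(self, windows, labels):
        self.labels_ = sorted(set(labels))
        self.centroids_ = {
            lab: np.mean([emg_features(w)
                          for w, l in zip(windows, labels) if l == lab], axis=0)
            for lab in self.labels_
        }
        return self

    def predict(self, window):
        f = emg_features(window)
        return min(self.labels_,
                   key=lambda lab: np.linalg.norm(f - self.centroids_[lab]))

# Synthetic demo: two mock "phonemes" that differ in amplitude and frequency.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
make_a = lambda: 0.2 * np.sin(2 * np.pi * 30 * t) + 0.05 * rng.standard_normal(t.size)
make_b = lambda: 1.0 * np.sin(2 * np.pi * 120 * t) + 0.05 * rng.standard_normal(t.size)

train_windows = [make_a() for _ in range(10)] + [make_b() for _ in range(10)]
train_labels = ["/a/"] * 10 + ["/b/"] * 10
clf = NearestCentroidPhonemeClassifier().fit(train_windows, train_labels)
print(clf.predict(make_a()), clf.predict(make_b()))
```

In a real system, windows would come from multi-channel filtered sEMG rather than a single synthetic sine, and the label set would span the IPA inventory described in the abstract.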