TY - CONF
T1 - Non-Audible Speech Classification Using Deep Learning Approaches
AU - Fernandes, Rommel
AU - Huang, Lei
AU - Vejarano, Gustavo
PY - 2019/12/1
Y1 - 2019/12/1
AB - Recent advances in human-computer interaction (HCI) research have been made to help post-stroke patients cope with physiological problems such as speech impediments caused by aphasia. This paper investigates different deep learning approaches to non-audible speech recognition from electromyography (EMG) signals, including a novel approach that combines continuous wavelet transforms (CWT) with convolutional neural networks (CNNs). To compare its performance with other popular deep learning approaches, we collected facial surface EMG bio-signals from subjects with binary and multi-class labels, and trained and tested four models: a long short-term memory (LSTM) model, a bidirectional LSTM model, a 1-D CNN model, and our proposed CWT-CNN model. Experimental results show that our proposed approach performs better than the LSTM models but is less efficient than the 1-D CNN model on our collected data set. In comparison with previous research, we gained insights into how to improve model performance for binary and multi-class silent speech recognition.
UR - https://digitalcommons.lmu.edu/cs_fac/25/
DO - 10.1109/CSCI49370.2019.00118
M3 - Conference paper
SP - 630
EP - 634
JO - 2019 International Conference on Computational Science and Computational Intelligence (CSCI)
JF - 2019 International Conference on Computational Science and Computational Intelligence (CSCI)
ER -