Many studies on sign language recognition using electromyography (EMG) and inertial signals have been reported. In this study, sign language recognition was performed using two armband modules, each consisting of an 8-channel EMG sensor and one inertial sensor, and the effect of sensor re-wearing on EMG-based sign language recognition was examined. Five non-deaf and four deaf subjects performed sign language for 40 and 19 Korean words, respectively. For each word, EMG signals and inertial data were measured using the two armband modules worn around the left and right forearms. Every sign was repeated five times, and the entire experiment was repeated five times over two weeks, with the modules re-worn each session. Mean absolute value, Wilson amplitude, and zero crossing were selected as the time-domain features of the EMG signals for an artificial neural network (ANN) classifier. The results showed that classification accuracy improved significantly as the amount of training data increased: the average accuracy was only 54.69% when training without accounting for sensor re-wearing, but rose to 89.19% after training on data obtained from four re-wearing sessions.
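The three time-domain EMG features named in the abstract (mean absolute value, Wilson amplitude, and zero crossing) have standard definitions over a windowed signal. A minimal sketch of how they might be computed is shown below; the window data, threshold values, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mav(x):
    """Mean absolute value: average of |x_i| over the window."""
    return float(np.mean(np.abs(x)))

def wilson_amplitude(x, thresh=0.01):
    """Wilson amplitude: count of consecutive-sample differences
    exceeding a threshold (thresh is an assumed value)."""
    return int(np.sum(np.abs(np.diff(x)) > thresh))

def zero_crossings(x, thresh=0.01):
    """Zero crossings: count of sign changes whose amplitude step
    also exceeds a threshold, to suppress noise-induced crossings."""
    sign_change = np.sign(x[:-1]) * np.sign(x[1:]) < 0
    big_enough = np.abs(np.diff(x)) > thresh
    return int(np.sum(sign_change & big_enough))

# Illustrative window: an alternating signal
window = np.array([0.5, -0.5, 0.5, -0.5])
features = [mav(window), wilson_amplitude(window), zero_crossings(window)]
```

In a setup like the one described, these three features would be computed per EMG channel and concatenated into the input vector of the ANN classifier.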
Number of pages: 6
Journal: Transactions of the Korean Society of Mechanical Engineers, B
State: Published - Mar 2020
- Artificial Neural Network
- Sign Language