A Real Time Malaysian Sign Language Detection Algorithm Based on YOLOv3
Mohamad Amar Mustaqim Mohamad Asri¹, Zaaba Ahmad², Itaza Afiani Mohtar³, Shafaf Ibrahim⁴

¹Mohamad Amar Mustaqim Mohamad Asri, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Perak Branch Tapah Campus, Tapah Road, Perak, Malaysia.
²Zaaba Ahmad, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Perak Branch Tapah Campus, Tapah Road, Perak, Malaysia.
³Itaza Afiani Mohtar, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Perak Branch Tapah Campus, Tapah Road, Perak, Malaysia.
⁴Shafaf Ibrahim, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Melaka Branch Jasin Campus, Merlimau, Melaka, Malaysia.
Manuscript received on 11 October 2019 | Revised Manuscript received on 20 October 2019 | Manuscript Published on 02 November 2019 | PP: 651-656 | Volume-8 Issue-2S11 September 2019 | Retrieval Number: B11020982S1119/2019©BEIESP | DOI: 10.35940/ijrte.B1102.0982S1119
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Sign language is a language that involves the movement of hand gestures. It is a medium for hearing-impaired persons (deaf or mute) to communicate with others. However, in order to communicate with a hearing-impaired person, the communicator must have knowledge of sign language. This ensures that the message delivered by the hearing-impaired person is understood. This project proposes a real-time Malaysian Sign Language detection system based on the Convolutional Neural Network (CNN) technique, utilizing the You Only Look Once version 3 (YOLOv3) algorithm. Sign language images from web sources and frames from recorded sign language videos were collected, and the images were labelled as either alphabets or movements. Once the preprocessing phase was completed, the system was trained and tested on the Darknet framework. The system achieved 63 percent accuracy, with learning saturation (overfitting) at 7000 iterations. In the future, this model will be integrated with other platforms, such as a mobile application.
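
A detection pipeline of the kind the abstract describes can be exercised with standard tooling once Darknet training has produced a configuration file, weights, and class list. The following is a minimal sketch, not the authors' implementation, of real-time YOLOv3 inference over a webcam stream using OpenCV's DNN module; the file names yolov3-msl.cfg, yolov3-msl.weights, and msl.names are hypothetical placeholders, and the 0.5 confidence threshold is an assumed value.

# Minimal sketch (assumes OpenCV >= 3.4 with the DNN module) of real-time
# YOLOv3 inference on a webcam stream. File names are hypothetical placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-msl.cfg", "yolov3-msl.weights")
classes = open("msl.names").read().strip().split("\n")
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # YOLOv3 expects a square input blob (416x416 here), pixels scaled to [0, 1]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)

    boxes, confidences, class_ids = [], [], []
    for output in net.forward(out_layers):
        for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            cid = int(np.argmax(scores))
            conf = float(scores[cid])
            if conf > 0.5:  # assumed confidence threshold
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(cid)

    # Non-maximum suppression drops overlapping duplicate boxes
    idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    for i in np.array(idxs).flatten():
        x, y, bw, bh = boxes[i]
        label = "%s: %.2f" % (classes[class_ids[i]], confidences[i])
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    cv2.imshow("MSL detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()

Frame-by-frame inference of this kind is what makes the approach real-time: each captured frame is passed through the trained network and annotated before the next frame is read.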
Keywords: Convolutional Neural Network (CNN), Sign Language Translation, YOLO.
Scope of the Article: Real-Time Information Systems