
Open Education Resource for School Children with Down Syndrome
Kotur Guna Pragna¹, Dindi Dhanunjai²
¹Kotur Guna Pragna, Computer Science Engineering, Vellore Institute of Technology, Vellore, India.
²Dindi Dhanunjai, Computer Science Engineering, Vellore Institute of Technology, Vellore, India.

Manuscript received on November 17, 2019. | Revised Manuscript received on November 24, 2019. | Manuscript published on November 30, 2019. | PP: 11945-11948 | Volume-8 Issue-4, November 2019. | Retrieval Number: D9906118419/2019©BEIESP | DOI: 10.35940/ijrte.D9906.118419

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Action recognition in video sequences is a challenging problem in computer vision due to the similarity of visual contents, changes in viewpoint for the same action, camera motion with the action performer, variations in the scale and pose of the actor, and differing illumination conditions. Moreover, there is no dedicated action recognition model for hazy videos. This paper proposes a novel unified model for action recognition in haze, built with a Convolutional Neural Network (CNN) and a deep bidirectional LSTM (DB-LSTM) network. First, every frame of the hazy video is fed into AOD-Net (All-in-One Dehazing Network). Next, deep features are extracted from every sampled dehazed frame using VGG-16, which helps reduce redundancy and complexity. The sequential and temporal information among the frame features is then learned by the DB-LSTM network, in which multiple layers are stacked in both the forward and backward passes to increase its depth. The proposed unified method is capable of learning long-term sequences and can process lengthy videos, including hazy ones, in real time by analyzing features over a fixed time interval. Experimental results on both synthesized and natural videos show that the proposed method performs on par with other state-of-the-art action recognition methods on the benchmark dataset UCF-101. This helps students with Down Syndrome recognize actions faster.
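As a rough illustration of the pipeline the abstract describes (dehaze each frame, extract per-frame deep features with VGG-16, then model the temporal sequence with a stacked bidirectional LSTM), the sketch below uses PyTorch. The dehazing stage is shown as a placeholder, since AOD-Net weights are distributed separately; the hidden size, number of LSTM layers, and classifier head are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models


class HazeActionRecognizer(nn.Module):
    """Sketch of the CNN + deep bidirectional LSTM pipeline (assumed sizes)."""

    def __init__(self, num_classes=101, hidden_size=256, num_layers=2):
        super().__init__()
        # Frame-level feature extractor: VGG-16 convolutional backbone.
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.backbone = vgg.features
        self.pool = nn.AdaptiveAvgPool2d(1)  # 512-d vector per frame
        # Stacked bidirectional LSTM over the sequence of frame features.
        self.lstm = nn.LSTM(input_size=512,
                            hidden_size=hidden_size,
                            num_layers=num_layers,
                            batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.pool(self.backbone(frames.view(b * t, c, h, w)))
        feats = feats.view(b, t, -1)       # (B, T, 512)
        out, _ = self.lstm(feats)          # (B, T, 2*hidden_size)
        return self.classifier(out[:, -1])  # logits per action class


def dehaze(frames):
    """Placeholder for the AOD-Net dehazing stage; a real implementation
    with pretrained weights would be substituted here."""
    return frames


# Usage on a dummy clip of 16 sampled frames:
model = HazeActionRecognizer()
clip = torch.randn(1, 16, 3, 224, 224)
logits = model(dehaze(clip))
print(logits.shape)  # torch.Size([1, 101])
```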
Keywords: CNN, Haze, Bidirectional LSTM, Deep Learning.
Scope of the Article: Deep Learning.