
Deep Learning for Human Activity Recognition using on-Node Sensors
Rebba Pravallika¹, Devavarapu Sreenivasarao², Shaik Khasim Saheb³

¹Rebba Pravallika is currently pursuing the B.Tech degree program in Computer Science & Engineering at Sreenidhi Institute of Science and Technology, affiliated to Jawaharlal Nehru Technological University Hyderabad, Telangana, India.
²Devavarapu Sreenivasarao is currently working as an Assistant Professor in the Computer Science & Engineering Department at Sreenidhi Institute of Science and Technology; his research areas include Medical Image Processing and Machine Learning.
³Shaik Khasim Saheb is currently working as an Assistant Professor in the Computer Science & Engineering Department at Sreenidhi Institute of Science and Technology; his research areas include Medical Image Processing and Machine Learning.
Manuscript received on January 02, 2020. | Revised Manuscript received on January 15, 2020. | Manuscript published on January 30, 2020. | PP: 607-614 | Volume-8 Issue-5, January 2020. | Retrieval Number: E5654018520/2020©BEIESP | DOI: 10.35940/ijrte.E5654.018520

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Advances in technology, the availability of resources, and the increased use of on-node sensors have led to enormous amounts of data being generated. This physiological information needs to be analyzed and classified with efficient and effective approaches such as deep learning and artificial intelligence. Human Activity Recognition (HAR) plays a dominant role in sports, security, anti-crime, and healthcare, as well as in environmental applications such as wildlife observation. Most techniques work well for offline processing rather than real-time processing. Few approaches provide maximum accuracy for real-time processing of large-scale data; one of the promising approaches is deep learning. Limited resources are one of the factors restricting the use of deep learning on low-power wearable devices, even though deep learning implementations are known to produce precise results on other computing systems. In this paper we propose a deep learning approach that integrates features learned from inertial-sensor data with complementary knowledge obtained from a set of shallow features, making accurate real-time activity classification possible. The aim of this integrated design is to remove the obstacles to using deep learning methods for real-time analysis. Before passing the data to the deep learning framework, we perform spectral analysis to optimize the proposed methodology for on-node computation. The accuracy of the combined approach is evaluated on datasets collected in the laboratory and in real-world controlled and uncontrolled environments. Our results demonstrate the validity of the methodology on various human activity datasets, outperforming other techniques, including the two methods used within our combined pipeline. We also show that the classification times of our integrated design are consistent with the requirements for on-node, real-time analysis on smartphones and wearable devices.
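
For illustration only, the sketch below shows one possible reading of the pipeline described in the abstract: spectral analysis of windowed inertial-sensor data feeding a small deep network, combined with hand-crafted "shallow" features in a second branch. This is not the authors' implementation; the window length, sampling rate, shallow-feature set, and layer sizes are assumptions chosen for a minimal runnable example.

```python
# Minimal sketch (assumed configuration, not the paper's implementation):
# spectral preprocessing of inertial-sensor windows plus a two-branch model
# that merges learned (deep) features with shallow statistical features.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

WINDOW = 128          # samples per window (assumed)
N_CHANNELS = 3        # tri-axial accelerometer
N_CLASSES = 6         # e.g. walking, sitting, standing, ... (assumed)

def spectral_input(window):
    """Per-channel spectrogram of one (WINDOW, N_CHANNELS) window."""
    specs = [spectrogram(window[:, c], fs=50, nperseg=32, noverlap=16)[2]
             for c in range(N_CHANNELS)]
    return np.stack(specs, axis=-1)            # (freq, time, channels)

def shallow_features(window):
    """Simple hand-crafted features per channel: mean, std, energy."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           (window ** 2).mean(axis=0)])

# Branch 1: small CNN over the spectrogram (learned features).
spec_shape = spectral_input(np.zeros((WINDOW, N_CHANNELS))).shape
spec_in = tf.keras.Input(shape=spec_shape)
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(spec_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Branch 2: dense layer over the shallow feature vector.
shallow_in = tf.keras.Input(shape=(3 * N_CHANNELS,))
y = tf.keras.layers.Dense(16, activation="relu")(shallow_in)

# Merge both branches and classify the activity.
merged = tf.keras.layers.Concatenate()([x, y])
out = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(merged)
model = tf.keras.Model([spec_in, shallow_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Keeping both branches small, as in this sketch, is one way such a combined design could stay within the classification-time budget of on-node, real-time analysis on smartphones and wearables.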
Keywords: Deep Learning, Human Activity Recognition, ActiveMiles, Low power devices, Shallow Features.
Scope of the Article: Deep Learning.