Optimization of Deep Learning using various Optimizers, Loss functions and Dropout
S. V. G. Reddy1, K. Thammi Reddy2, V. Valli Kumari3

1S. V. G. Reddy, Associate Professor, Department of CSE, GIT, GITAM University, Hyderabad (Telangana), India.
2K. Thammi Reddy, Professor, Department of CSE, GIT, GITAM University, Hyderabad (Telangana), India.
3V. Valli Kumari, Professor, Department of CS & SE, College of Engineering, Andhra University, (Andhra Pradesh), India.
Manuscript received on 17 December 2018 | Revised Manuscript received on 29 December 2018 | Manuscript Published on 24 January 2019 | PP: 448-455 | Volume-7 Issue-4S2 December 2018 | Retrieval Number: ES2099017518/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Deep Learning is gaining prominence due to its breakthrough results in fields such as Computer Vision, Natural Language Processing, Time Series Analysis, and Health Care. Earlier, Deep Learning was implemented using the batch and stochastic gradient descent algorithms along with a few basic optimizers, which led to poor model performance. Today, however, considerable work is being done to enhance the performance of Deep Learning models through various optimization techniques. In this context, it is proposed to build Deep Learning models using various optimizers (Adagrad, RMSProp, Adam), loss functions (mean squared error, binary cross entropy), and the Dropout concept for Convolutional Neural Networks and Recurrent Neural Networks, and to evaluate model performance in terms of Accuracy and Loss. The proposed model achieved maximum Accuracy when the Adam optimizer and the mean squared error loss function were applied to Convolutional Neural Networks, and it ran with minimum Loss when the same Adam optimizer and mean squared error loss function were applied to Recurrent Neural Networks. When regularizing the model, maximum Accuracy was achieved when Dropout with a minimum fraction 'p' of nodes was applied to Convolutional Neural Networks, and the model ran with minimum Loss when the same dropout value was applied to Recurrent Neural Networks.
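The following is a minimal sketch, not the authors' actual code, of the experimental configuration the abstract describes: a small CNN and RNN compiled with the Adam optimizer and the mean squared error loss, regularized with a low dropout fraction p. The dataset shapes, layer sizes, and the dropout value are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

p = 0.2  # assumed low dropout fraction; the paper reports best results at a minimum 'p'

# CNN sketch: Adam + mean squared error, the combination the abstract
# reports as giving the highest Accuracy for convolutional networks.
cnn = models.Sequential([
    layers.Input(shape=(28, 28, 1)),            # assumed image shape (e.g. MNIST)
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(p),                          # dropout regularization
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),     # assumed 10-class problem
])
cnn.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"])

# RNN sketch: the same Adam + mean squared error pairing, which the
# abstract reports as giving the minimum Loss for recurrent networks.
rnn = models.Sequential([
    layers.Input(shape=(50, 1)),                # assumed sequence length and feature count
    layers.LSTM(32),
    layers.Dropout(p),
    layers.Dense(1),
])
rnn.compile(optimizer="adam", loss="mean_squared_error")
```

Swapping in the other configurations studied is a one-line change at compile time, e.g. `optimizer="rmsprop"` or `optimizer="adagrad"`, and `loss="binary_crossentropy"` in place of `"mean_squared_error"`.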
Keywords: Deep Learning, Convolutional Neural Networks, CNN, Recurrent Neural Networks, RNN, Computer Vision, Natural Language Processing, Time Series Analysis.
Scope of the Article: Deep Learning