
Unconstrained Ear Recognition through Domain Adaptive Deep Learning Models of Convolutional Neural Network
Marwin Alejo1, Cris Paulo Hate2 

1Marwin B. Alejo, Department of Graduate Studies, Technological Institute of the Philippines, Quezon City, Philippines.
2Cris Paulo G. Hate, Department of Graduate Studies, Technological Institute of the Philippines, Quezon City, Philippines.

Manuscript received on 20 March 2019 | Revised Manuscript received on 25 March 2019 | Manuscript published on 30 July 2019 | PP: 3143-3150 | Volume-8 Issue-2, July 2019 | Retrieval Number: B2865078219/19©BEIESP | DOI: 10.35940/ijrte.B2865.078219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The limited availability of ear datasets leads to the adoption of domain adaptive deep learning, or transfer learning, in the development of ear biometric recognition. Ear recognition is a variation of biometrics that is becoming popular in various areas of research due to the advantages of ears for human identity recognition. In this paper, handpicked CNN architectures, namely AlexNet, GoogLeNet, Inception-v3, Inception-ResNet-v2, ResNet-18, ResNet-50, SqueezeNet, ShuffleNet, and MobileNet-v2, are explored and compared for use in unconstrained ear biometric recognition. 250 unconstrained ear images are collected from the web through web crawlers and preprocessed with basic image processing methods, including contrast limited adaptive histogram equalization (CLAHE) for ear image quality improvement. Each CNN architecture is analyzed structurally and fine-tuned to satisfy the requirements of ear recognition. The earlier layers of each architecture are used as feature extractors, while the last two to three layers are replaced with layers of the same kind so that the resulting ear recognition models classify 10 classes of ears instead of 1000. Eighty percent of the acquired unconstrained ear images is used for training and the remaining 20 percent is reserved for testing and validation. The architectures are compared in terms of training time, training and validation outputs such as learned features and losses, and test results at an above-95% accuracy confidence. Among the architectures used, ResNet, AlexNet, and GoogLeNet achieved an accuracy confidence of 97-100% and are best suited for unconstrained ear biometric recognition, while ShuffleNet, despite achieving only approximately 90%, shows promising results for a mobile version of unconstrained ear biometric recognition.
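
The abstract describes two main technical steps: CLAHE-based image enhancement and fine-tuning a pretrained CNN so that its final layers classify 10 ear classes instead of the original 1000 ImageNet classes. The paper's own code is not reproduced here; the sketch below is a hypothetical, minimal illustration of those steps using OpenCV and PyTorch/torchvision, with ResNet-18 as the example backbone. The CLAHE parameters, function names, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): CLAHE preprocessing and
# transfer learning of a pretrained ResNet-18 for 10 ear classes.
import cv2
import torch
import torch.nn as nn
from torchvision import models

def clahe_enhance(image_path):
    """Apply contrast limited adaptive histogram equalization (CLAHE)
    to a grayscale ear image, as in the abstract's preprocessing step."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed parameters
    return clahe.apply(gray)

def build_ear_model(num_classes=10):
    """Load an ImageNet-pretrained ResNet-18, freeze the earlier layers
    (fixed feature extractor), and replace the final fully connected layer
    so the model classifies num_classes ear classes instead of 1000."""
    model = models.resnet18(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False          # earlier layers act as feature extractors only
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable classification layer
    return model

if __name__ == "__main__":
    model = build_ear_model(num_classes=10)
    dummy = torch.randn(1, 3, 224, 224)      # one 224x224 RGB image, the ResNet-18 input size
    print(model(dummy).shape)                # torch.Size([1, 10])
```

An analogous replacement of the final classification layer would apply to the other architectures named in the abstract (via their equivalent classifier attributes), with the 80/20 train/test split applied to the preprocessed ear images before fine-tuning.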
Index Terms: Ear Recognition, Domain Adaptive Deep Learning, Convolutional Neural Network, Transfer Learning.

Scope of the Article: Deep Learning