
Emotion Recognition of Manipuri Speech using Convolution Neural Network
Gurumayum Robert Michael1, Aditya Bihar Kandali2

1G. R. Michael, Dept. of ECE, Dibrugarh University, Dibrugarh, India.
2Dr. Aditya Bihar Kandali, Electrical Department, Jorhat Engineering College, Jorhat, India.

Manuscript received on April 30, 2020. | Revised Manuscript received on May 06, 2020. | Manuscript published on May 30, 2020. | PP: 2364-2366 | Volume-9 Issue-1, May 2020. | Retrieval Number: F9896038620/2020©BEIESP | DOI: 10.35940/ijrte.F9896.059120
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: In recent years, much progress has been made in artificial intelligence, machine learning, and human-machine interaction. Interacting with a machine by voice, or commanding it to perform a specific task, is increasingly popular, and many consumer electronics devices integrate assistants such as Siri, Alexa, Cortana, and Google Assistant. However, machines still cannot interact with a person like a human conversational partner: they cannot recognize human emotion and react to it. Emotion recognition from speech is a cutting-edge research topic in the field of human-machine interaction. As machines become indispensable to our lives, there is a demand for more robust man-machine communication systems, and many researchers are currently working on speech emotion recognition (SER) to improve man-machine interaction. To achieve this goal, a computer should be able to recognize emotional states and react to them just as we humans do. The effectiveness of an SER system depends on the quality of the extracted features and the type of classifier used. In this paper we try to identify four basic emotions from speech: anger, sadness, neutral, and happiness. Short Manipuri speech audio clips taken from movies serve as the training and testing dataset. We use a convolutional neural network (CNN) to identify the four emotions, with Mel Frequency Cepstral Coefficients (MFCC) as the feature extraction technique.
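As a rough illustration of the MFCC front end described in the abstract, the sketch below computes MFCC features from a raw waveform using only NumPy. The frame size, hop length, filter count, and coefficient count are common defaults, not the authors' settings, and the synthetic tone merely stands in for a Manipuri speech clip; the resulting feature matrix is the kind of input a CNN classifier would consume.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # 1. Frame the signal with a Hamming window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hamming(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Mel filterbank energies, floored to avoid log(0).
    energies = np.maximum(power @ mel_filterbank(sr, n_fft, n_mels).T, 1e-10)
    # 4. Log compression followed by DCT-II; keep the first n_mfcc coefficients.
    log_e = np.log(energies)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return log_e @ dct.T  # shape: (n_frames, n_mfcc)

# Demo on a synthetic one-second tone (a stand-in for a speech clip).
sr = 16000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440.0 * t)
feats = mfcc(clip, sr)
print(feats.shape)  # one 13-dimensional feature vector per frame
```

Each row of the returned matrix is one frame's cepstral feature vector; stacking the rows gives a 2-D "image" of the utterance that a CNN can classify into one of the four emotion labels.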
Keywords: CNN, emotion recognition, human-machine interface, MFCC.
Scope of the Article: Convolution Neural Network