Università degli Studi di Udine - OpenUniud: Institutional Archive of Doctoral Theses
 


Use this identifier to cite or link to this document: http://hdl.handle.net/10990/576

Authors: Rabiei, Mohammad
University supervisor: GASPARETTO, ALESSANDRO
Research center: DIPARTIMENTO INGEGNERIA ELETTRICA GESTIONALE MECCANICA - DIEG
Title: A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction
Abstract (in English): With the advance of Artificial Intelligence, humanoid robots have started to interact with ordinary people, building on a growing understanding of psychological processes. Accumulating evidence in Human-Robot Interaction (HRI) suggests that research is focusing on establishing emotional communication between humans and robots, in order to create social perception, cognition, the desired interaction and sensation. Furthermore, robots need to perceive human emotions and adapt their behavior to help and interact with human beings in various environments. The most natural way to recognize basic emotions is to extract sets of features from human speech, facial expressions and body gestures. A system for recognizing emotions based on speech analysis and facial feature extraction can therefore have interesting applications in Human-Robot Interaction. The Human-Robot Interaction ontology explains how knowledge from these fundamental sciences is applied in the contexts of physics (sound analysis), mathematics (face detection and perception), philosophical theory (behavior) and robotic science. In this project, we carry out a study to recognize the basic emotions (sadness, surprise, happiness, anger, fear and disgust), and we propose a methodology and a software program for the classification of emotions based on speech analysis and facial feature extraction. The speech analysis phase investigates the appropriateness of using acoustic (pitch value, pitch peak, pitch range, intensity and formants) and phonetic (speech rate) properties of emotive speech with the freeware program PRAAT, and consists of generating and analyzing a graph of the speech signal. The proposed architecture investigates the appropriateness of analyzing emotive speech with minimal use of signal processing algorithms. Thirty participants in the experiment repeated five sentences in English (with durations typically between 0.40 s and 2.5 s) in order to extract data on pitch (value, range and peak) and rising-falling intonation. Pitch alignments (peak, value and range) were evaluated and the results were compared with intensity and speech rate. The facial feature extraction phase uses a mathematical formulation (Bézier curves) and a geometric analysis of the facial image, based on measurements of a set of Action Units (AUs), to classify the emotion. The proposed technique consists of three steps: (i) detecting the facial region within the image, (ii) extracting and classifying the facial features, and (iii) recognizing the emotion. The new data are then merged with reference data in order to recognize the basic emotion. Finally, we combined the two proposed algorithms (speech analysis and facial expression) to design a hybrid technique for emotion recognition. This technique has been implemented in a software program that can be employed in Human-Robot Interaction. The efficiency of the methodology was evaluated by experimental tests on 30 individuals (15 female and 15 male, 20 to 48 years old) from different ethnic groups, namely: (i) ten European adults, (ii) ten Asian (Middle East) adults and (iii) ten American adults. Eventually, the proposed technique made it possible to recognize the basic emotion in most cases.
Keywords: Action Units (AUs); Emotion recognition; Facial expression; Human-robot Interaction; Speech analysis
MIUR sector: ING-IND/13 - Meccanica Applicata alle Macchine (Applied Mechanics of Machines)
Language: eng
Date: 8 April 2015
PhD program: Dottorato di ricerca in Ingegneria industriale e dell'informazione (Industrial and Information Engineering)
PhD cycle: 27
Degree-granting university: Università degli Studi di Udine
Place of defense: Udine
Citation: Rabiei, M. A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction. (Doctoral Thesis, Università degli Studi di Udine, 2015).
Appears in collection: 01 - Doctoral theses (Tesi di dottorato)
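The abstract above describes two families of features: prosodic measurements of speech (pitch value, peak and range, compared with intensity and speech rate) and facial contours modeled with Bézier curves for Action Unit measurement. The short Python sketch below is purely illustrative and is not the thesis software: the function names, the fixed frame period and the example numbers are assumptions, and the pitch contour is assumed to have been extracted beforehand with PRAAT or a similar tool.

```python
import numpy as np

def prosodic_features(pitch_hz, frame_period_s=0.01):
    """Summarize a pitch contour (one value in Hz per analysis frame).

    Returns the pitch value (mean), pitch peak (max) and pitch range
    (max - min) over voiced frames, the kind of quantities compared
    with intensity and speech rate in the speech analysis phase.
    """
    voiced = pitch_hz[pitch_hz > 0]            # unvoiced frames are 0 Hz
    return {
        "pitch_value_hz": float(voiced.mean()),
        "pitch_peak_hz": float(voiced.max()),
        "pitch_range_hz": float(voiced.max() - voiced.min()),
        "duration_s": len(pitch_hz) * frame_period_s,
    }

def cubic_bezier(control_points, n=50):
    """Evaluate a cubic Bézier curve defined by four (x, y) control points.

    Curves of this kind can approximate facial feature contours
    (eyebrow, eye, lip) whose deformation underlies the Action Unit
    measurements used for emotion classification.
    """
    p = np.asarray(control_points, dtype=float)   # shape (4, 2)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
            + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])

# Hypothetical usage: a short rising pitch contour and a lip contour.
pitch = np.array([0, 0, 180, 195, 210, 230, 0, 220, 240, 0], dtype=float)
print(prosodic_features(pitch))
print(cubic_bezier([(0, 0), (1, 2), (3, 2), (4, 0)], n=5))
```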

Full text:

File: FINAL 2015.pdf
Description: A system for feature classification of emotions; applications to human-robot interaction
Size: 6.4 MB
Format: Adobe PDF
Availability: View/open


All documents archived in DSpace are protected by copyright. All rights reserved.




 

ICT support, development & maintenance are provided by CINECA. Powered by DSpace software.