Parallel Shared Hidden Layers Auto-encoder as a Cross-Corpus Transfer Learning Approach for Unsupervised Persian Speech Emotion Recognition

  1. Department of Electrical and Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
  2. Speech Processing Laboratory, Department of Computer Engineering, Sharif University of Technology, Tehran, Iran

Revised: 2021-04-24

Accepted: 2021-05-14

Published: 2021-12-01

How to Cite

Pourebrahim, Y., Razzazi, F., & Sameti, H. (2021). Parallel Shared Hidden Layers Auto-encoder as a Cross-Corpus Transfer Learning Approach for Unsupervised Persian Speech Emotion Recognition. Signal Processing and Renewable Energy (SPRE), 5(4), 83-106. https://oiccpress.com/spre/article/view/7857


Abstract

Detecting emotions from speech is a challenging topic in speech signal processing, especially for low-resource languages. Extracting features common to the training and testing sets with an unsupervised method can mitigate the mismatch between training and test data. In this study, a new auto-encoder-based structure is proposed as an unsupervised method for domain adaptation. The proposed structure uses shared encoders to learn common feature representations across the source- and target-domain datasets, minimizing the discrepancy between them. To evaluate the performance of the proposed method, five publicly available databases in different languages were used as training and testing datasets. Results across various scenarios demonstrate that the proposed method significantly improves classification performance compared to baseline and state-of-the-art unsupervised domain adaptation methods for speech emotion recognition. For example, when the source training set is EMOVO, the proposed method improves the emotion recognition rate on the Persian Emotional Speech Dataset (PESD) by 8% compared to plain cross-corpus training.
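To make the idea concrete, the sketch below (not the authors' exact model) shows a minimal shared-encoder auto-encoder in NumPy: one encoder/decoder pair is applied to both source- and target-domain features, and training minimizes reconstruction error on both domains plus a simple discrepancy penalty (squared distance between the domains' mean embeddings). The layer sizes, the discrepancy term, and the numerical-gradient optimizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_hid):
    # One set of weights shared by both domains.
    return {
        "We": rng.normal(0.0, 0.1, (n_in, n_hid)),  # shared encoder
        "Wd": rng.normal(0.0, 0.1, (n_hid, n_in)),  # shared decoder
    }

def encode(p, X):
    return np.tanh(X @ p["We"])

def decode(p, H):
    return H @ p["Wd"]

def loss(p, Xs, Xt, lam=1.0):
    # Reconstruction error on both domains + domain-mean discrepancy
    # in the shared latent space (a stand-in for the paper's criterion).
    Hs, Ht = encode(p, Xs), encode(p, Xt)
    rec = np.mean((decode(p, Hs) - Xs) ** 2) + np.mean((decode(p, Ht) - Xt) ** 2)
    disc = np.sum((Hs.mean(axis=0) - Ht.mean(axis=0)) ** 2)
    return rec + lam * disc

def grad_step(p, Xs, Xt, lr=0.05, lam=1.0, eps=1e-5):
    # Central-difference numerical gradients keep the sketch short;
    # a real system would use autodiff and mini-batch SGD.
    for k in p:
        g = np.zeros_like(p[k])
        it = np.nditer(p[k], flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = p[k][i]
            p[k][i] = old + eps
            lp = loss(p, Xs, Xt, lam)
            p[k][i] = old - eps
            lm = loss(p, Xs, Xt, lam)
            p[k][i] = old
            g[i] = (lp - lm) / (2 * eps)
        p[k] -= lr * g
    return p

# Toy usage: target features are a shifted version of the source features.
Xs = rng.normal(0.0, 1.0, (40, 6))   # "source domain" features
Xt = rng.normal(0.5, 1.0, (40, 6))   # "target domain" features (shifted)
p = init(n_in=6, n_hid=3)
before = loss(p, Xs, Xt)
for _ in range(10):
    p = grad_step(p, Xs, Xt)
after = loss(p, Xs, Xt)
```

After a few steps, both the reconstruction error and the gap between the domains' mean embeddings shrink, which is the intuition behind training a classifier on latent features that transfer across corpora.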