CITI: Music Emotion Recognition
- Lecturer: Dr. Yi-Hsuan Yang (National Taiwan University)
- Host: Dr. Wen-Huang Cheng
- Time: 2011-03-14 (Mon.) 10:30 – 12:00
- Location: Auditorium 106 at new IIS Building
This talk provides a comprehensive introduction to research on modeling human emotion perception of music, a topic that has emerged in the face of the explosive growth of digital music. Automatic recognition of the perceived emotion of music allows users to retrieve and organize their music collections in a fashion that is more content-centric than conventional metadata-based methods. Building such a music emotion recognition system, however, is challenging because of the subjective nature of emotion perception. One needs to deal with issues such as the reliability of ground-truth data and the difficulty of evaluating the prediction result, which do not arise in other pattern recognition problems such as face recognition and speech recognition. This talk details the methods that have been developed to address the ambiguity and granularity of emotion description, the heavy cognitive load of emotion annotation, the subjectivity of emotion perception, and the semantic gap between low-level audio signals and high-level emotion perception.
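As a rough illustration of the task described above (not necessarily the speaker's method), music emotion recognition is often cast as regression in a two-dimensional valence-arousal plane: each clip's audio features are mapped to a continuous (valence, arousal) pair. The sketch below uses a toy nearest-neighbor regressor with made-up feature values; real systems extract features such as tempo, energy, and timbre from audio.

```python
# Toy sketch of regression-based music emotion recognition in the
# valence-arousal (VA) plane. All feature values and annotations are
# fabricated for illustration only.

def knn_regress(train, query, k=3):
    """Predict a (valence, arousal) pair for `query` by averaging the
    annotations of its k nearest training clips (Euclidean distance
    in feature space)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    valence = sum(va[0] for _, va in nearest) / k
    arousal = sum(va[1] for _, va in nearest) / k
    return valence, arousal

# (features) -> (valence, arousal), annotations in [-1, 1].
# Features are a hypothetical (normalized tempo, normalized energy) pair.
training_clips = [
    ((0.90, 0.80), ( 0.7,  0.8)),  # fast, loud  -> happy/excited
    ((0.80, 0.90), ( 0.5,  0.9)),
    ((0.20, 0.10), (-0.4, -0.6)),  # slow, quiet -> sad/calm
    ((0.30, 0.20), (-0.5, -0.5)),
    ((0.10, 0.30), (-0.3, -0.4)),
    ((0.85, 0.70), ( 0.6,  0.7)),
]

# A slow, quiet query clip lands near the "sad/calm" examples.
v, a = knn_regress(training_clips, (0.15, 0.20), k=3)
print(round(v, 2), round(a, 2))  # prints: -0.4 -0.5
```

The talk's central point is that, unlike face or speech recognition, the ground-truth annotations here are themselves subjective, so even a perfect regressor can only match an averaged or distributional notion of the "correct" emotion.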
Dr. Yi-Hsuan Yang received the Ph.D. degree in Communication Engineering from National Taiwan University, Taiwan, in 2010. His research interests include multimedia signal processing, music information retrieval, machine learning, and affective computing. He has published over 30 technical papers in these areas and is the author of the book Music Emotion Recognition. He was awarded the Microsoft Research Asia Fellowship in 2008, the MediaTek Fellowship in 2009, and the Best Ph.D. Dissertation Award of the Taiwanese Association for Artificial Intelligence in 2010.