

Journal of Information Science and Engineering, Vol. 30 No. 6, pp. 1927-1944 (November 2014)

Bird Species Identification Based on Timbre and Pitch Features of Their Vocalization*

1Department of Electronic Engineering
Graduate Institute of Computer and Communication Engineering
National Taipei University of Technology
Taipei, 106 Taiwan
2Department of Computer Science and Information Engineering
Hungkuang University
Taichung, 433 Taiwan

Although wild bird watching has been a popular leisure activity, people often have no idea what species of bird they are seeing or hearing. To help people learn to identify bird species, this study proposes an automatic bird sound identification system. Considering that bird vocalizations can generally be divided into two categories, calls and songs, the proposed system is built on a two-stage identification framework. The first stage performs call/song classification. If an unknown sound clip is classified as a call, it is then handled by a call identifier in the second stage; otherwise, it is handled by a song identifier. Both identifiers use two acoustic features, timbre and pitch, to determine which bird species the sound clip belongs to. For the timbre features, bird sounds are converted into Mel-frequency cepstral coefficients and their first derivatives and then analyzed using Gaussian mixture models. For the pitch features, bird sounds are converted into MIDI note sequences, and bigram models are then used to analyze the dynamic change information of the notes. Our experiments, conducted on a database covering twenty bird species common in the Taipei urban area, show that the proposed system achieves 90.4% accuracy.
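The pitch branch described above (pitch track → MIDI notes → bigram likelihood per species) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the add-one smoothing, the 128-note range, and the `identify` helper are assumptions made for the example.

```python
import math
from collections import defaultdict

def hz_to_midi(f_hz):
    """Map a fundamental frequency in Hz to a MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * math.log2(f_hz / 440.0)))

class BigramNoteModel:
    """Bigram model over MIDI note sequences with add-one smoothing.

    A simplified stand-in for the paper's pitch analysis; the exact
    smoothing and note-range handling used by the authors are not
    specified in the abstract.
    """
    def __init__(self, note_range=128):
        self.note_range = note_range
        self.counts = defaultdict(lambda: defaultdict(int))  # counts[prev][cur]
        self.totals = defaultdict(int)                       # totals[prev]

    def train(self, note_sequences):
        # Accumulate note-to-note transition counts over all training clips.
        for seq in note_sequences:
            for prev, cur in zip(seq, seq[1:]):
                self.counts[prev][cur] += 1
                self.totals[prev] += 1

    def log_likelihood(self, seq):
        # Smoothed log-probability of the observed note transitions.
        ll = 0.0
        for prev, cur in zip(seq, seq[1:]):
            p = (self.counts[prev][cur] + 1) / (self.totals[prev] + self.note_range)
            ll += math.log(p)
        return ll

def identify(models, pitch_track_hz):
    """Pick the species whose bigram model best explains the pitch track."""
    notes = [hz_to_midi(f) for f in pitch_track_hz]
    return max(models, key=lambda species: models[species].log_likelihood(notes))
```

In a full system this score would be combined with the GMM score over the MFCC timbre features before the final decision; the abstract does not state the fusion rule, so it is omitted here.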

Keywords: bird call, bird song, bird species, bird sound identification


Received December 5, 2012; revised February 22, June 27 & August 27, 2013; accepted September 26, 2013.
Communicated by Chia-Feng Juang.
* This work was supported in part by the National Science Council, Taiwan, under Grant No. NSC 101-2221-E-241-017.