{
"paper_id": "O14-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:04:25.863301Z"
},
"title": "On the Use of Speech Recognition Techniques to Identify Bird Species",
"authors": [
{
"first": "Wei-Ho",
"middle": [],
"last": "Tsai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taipei University of Technology",
"location": {
"addrLine": "No.1, Sec. 3, Chunghsiao E. Rd. Taipei City",
"postCode": "10608",
"country": "Taiwan"
}
},
"email": "whtsai@ntut.edu.tw"
},
{
"first": "Yu-Zhi",
"middle": [],
"last": "Xue",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taipei University of Technology",
"location": {
"addrLine": "No.1, Sec. 3, Chunghsiao E. Rd. Taipei City",
"postCode": "10608",
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Wild bird watching has become a popular leisure activity in recent years. Very often, people can see birds or hear their sounds, but have no idea what kind of bird species they are seeing. To help people learn to identify bird species from their sounds, we apply speech recognition techniques to build an automatic bird sound identification system. In this system, two acoustic cues are used for analysis, timbre and pitch. In the timbre-based analysis, Mel-Frequency Cepstral Coefficients (MFCCs) are used to characterize the bird sound. Then, we use Gaussian Mixture Models to represent the MFCCs as a set of parameters. In the pitch-based analysis, we convert bird sounds from their waveform representations into a sequence of MIDI notes. Then, Bigram models are used to capture the dynamic change information of the notes. We chose the top ten common bird species in the Taipei urban area to examine our system. Experiments conducted using audio data collected from commercial CDs and websites show that the timbre-based, pitch-based, and the combination thereof systems achieve 71.1%, 72.1%, and 75.0% accuracy of bird sound identification, respectively.",
"pdf_parse": {
"paper_id": "O14-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "Wild bird watching has become a popular leisure activity in recent years. Very often, people can see birds or hear their sounds, but have no idea what kind of bird species they are seeing. To help people learn to identify bird species from their sounds, we apply speech recognition techniques to build an automatic bird sound identification system. In this system, two acoustic cues are used for analysis, timbre and pitch. In the timbre-based analysis, Mel-Frequency Cepstral Coefficients (MFCCs) are used to characterize the bird sound. Then, we use Gaussian Mixture Models to represent the MFCCs as a set of parameters. In the pitch-based analysis, we convert bird sounds from their waveform representations into a sequence of MIDI notes. Then, Bigram models are used to capture the dynamic change information of the notes. We chose the top ten common bird species in the Taipei urban area to examine our system. Experiments conducted using audio data collected from commercial CDs and websites show that the timbre-based, pitch-based, and the combination thereof systems achieve 71.1%, 72.1%, and 75.0% accuracy of bird sound identification, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There are more than nine thousand and seven hundred bird species in the world. Although a number of birds are commonly seen, most people cannot recognize any of them. In this study, we attempt to develop automated techniques for identifying bird species from their sounds. Hereafter, this problem is referred to as bird sound identification. It is hoped that the techniques can help people learn about such animals by simply recording the bird sounds they hear and sending the recording to our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Up to now, there has been very limited published research devoted to bird sound identification. In (Anderson et al., 1996) , Anderson et al. used dynamic time warping to measure the differences in spectrogram between an unknown bird sound recording and the template bird sound recordings. In (Kogan & Margoliash, 1998) , Kogan et al. compared the performance of bird sound identification obtained with dynamic time warping and hidden Markov model, in which six acoustic features were used: linear predictive coding coefficients (LPCs), LPC-derived cepstral coefficients, LPC reflection, Mel-Frequency Cepstral Coefficients (MFCCs), log mel-filter bank channel, and linear mel-filter bank channel. In (McIlraith & Card, 1997) , McIlraith et al. used a backpropagation neural network and multivariate statistics to perform bird sound identification. The acoustic features tested in (McIlraith & Card, 1997) are the number of syllables, average syllable duration, standard deviation of syllable durations, average pause duration, and standard deviation of pause durations. In (Somervuo et al., 2006) , Somervuo et al. compared three acoustic features on bird sound identification: sinusoidal modeling features, MFCCs, and descriptive features. Nevertheless, it is worth noting that all of the aforementioned studies tackle bird sound identification from the perspective of timbre-based analysis only. They all ignore bird sounds' pitch information, which is an important factor in why a bird sound is often called a bird song.",
"cite_spans": [
{
"start": 99,
"end": 122,
"text": "(Anderson et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 125,
"end": 145,
"text": "Anderson et al. used",
"ref_id": null
},
{
"start": 292,
"end": 318,
"text": "(Kogan & Margoliash, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 321,
"end": 333,
"text": "Kogan et al.",
"ref_id": null
},
{
"start": 700,
"end": 724,
"text": "(McIlraith & Card, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 880,
"end": 904,
"text": "(McIlraith & Card, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 1073,
"end": 1096,
"text": "(Somervuo et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this work, we propose a bird sound identification system based on timbre and pitch analyses. In addition to applying the most prevalent speaker-identification method to our system, we devise a method for exploiting the pitch information in bird sounds. Our experiments show that bird sound identification based on pitch information performs slightly better than that based on timbre information. It is further observed that combined use of timbre and pitch information achieves superior performance over the use of the individual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The remainder of this paper is organized as follows. Section 2 introduces the configuration of the proposed bird sound system, in which the two major components, timbre-based analysis and pitch-based analysis, are described in Sections 3 and 4, respectively. Section 5 discusses the experiments for examining our system. In Section 6, we present the conclusions and direction of our future works. Figure 1 shows the proposed bird sound identification system. In essence, the system can be divided into two components, namely timbre-based analysis and pitch-based analysis. Both components operate in two phases: training and testing. The purpose of the training phase is to extract the timbre and pitch features in each bird species' sound and to represent the features as two sets of parametric models. In the testing phase, the system takes as input an unknown sound recording and produces as output two likelihood scores from the timbre-based and pitch-based analyses, respectively. The scores then are combined to serve as the basis of the decision. According to the maximum likelihood decision rule, the system decides an unknown sound recording in favor of bird species B * when the condition in Eq. (1) is satisfied:",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 arg max( ) i i i N B v r \uf061 \uf062 \uf02a \uf0a3 \uf0a3 \uf03d \uf0d7 \uf02b \uf0d7 ,",
"eq_num": "(1)"
}
],
"section": "System Overview",
"sec_num": "2."
},
{
"text": "where N is the number of bird species; v i and r i are the likelihood scores output from the timbre-based and pitch-based analyses with respect to the i-th bird species' models, respectively; and \uf061 and \uf062 are tunable weights. Figure 2 shows the procedure of the timbre-based analysis. It consists of feature extraction and Gaussian mixture modeling in the training phase, along with feature extraction and likelihood computation in the testing phase.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2."
},
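A minimal sketch of the decision rule in Eq. (1): a weighted sum of the two analyzers' scores followed by an arg max. It assumes the scores are comparable log-likelihoods; the function name and toy inputs are illustrative, not from the paper.

```python
import numpy as np

def identify_species(v, r, alpha=0.4, beta=0.6):
    """Return the index of the species maximizing the fused score of Eq. (1).

    v, r: length-N arrays of timbre-based and pitch-based likelihood scores
    (toy inputs below; real scores come from Sections 3 and 4)."""
    scores = alpha * np.asarray(v, float) + beta * np.asarray(r, float)
    return int(np.argmax(scores))      # B* = arg max_i (alpha*v_i + beta*r_i)

# Toy usage with three species; the fused decision picks index 2.
print(identify_species(v=[-10.2, -9.8, -9.5], r=[-20.1, -19.7, -18.9]))
```

In practice the two score streams would typically be normalized to a common range before weighting, since raw likelihoods from a GMM and a bigram model live on different scales.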
{
"text": "i \u03bb i \uf04c 1 \u03bb 2 \u03bb N \u03bb 1 \uf04c 2 \uf04c N \uf04c i i i r v s \uf0d7 \uf02b \uf0d7 \uf03d \uf062 \uf061 N i \uf0a3 \uf0a3 1 N i \uf0a3 \uf0a3 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2."
},
{
"text": "Among the timbre-based features investigated in (Kogan & Margoliash, 1998) , the Mel-scale Frequency Cepstral Coefficients (MFCCs) feature (Davis & Mermelstein, 1980) has been found to be superior to the others in bird sound identification. To compute MFCCs, a waveform signal first is divided into frames using a P-length sliding Hamming window with 0.5P-length overlapping between frames. Every frame then undergoes Hamming windowing and fast Fourier transform (FFT) with size J. Next, each frame is passed through a set of triangular filter banks, equally spaced on a Mel scale. Let |A t,j | denote the signal's magnitude with respect to FFT index j in frame t, where 1\uf0a3 j\uf0a3 J. Then,",
"cite_spans": [
{
"start": 48,
"end": 74,
"text": "(Kogan & Margoliash, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 139,
"end": 166,
"text": "(Davis & Mermelstein, 1980)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "2 , , 1 1 log ( ) cos ( 0.5) ,1 b b u B t i t j b b jl i X A T j b i B B B \uf070 \uf03d \uf03d \uf0ec \uf0fc \uf0e6 \uf0f6 \uf0ef \uf0ef \uf0e6 \uf0f6 \uf03d \uf0d7 \uf02d \uf0a3 \uf0a3 \uf0e7 \uf0f7 \uf0ed \uf0fd \uf0e7 \uf0f7 \uf0e7 \uf0f7 \uf0e8 \uf0f8 \uf0ef \uf0ef \uf0e8 \uf0f8 \uf0ee \uf0fe \uf0e5 \uf0e5 ,",
"eq_num": "(2)"
}
],
"section": "Feature Extraction",
"sec_num": "3.1"
},
{
"text": "where B is the total number of filter banks, l b is the lowest frequency index in the b-th bank, u b is the highest frequency index in the b-th bank, and T b (j) is the response of the b-th bank. Briefly, MFCCs represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. It is found that the nonlinear mel scale of frequency approximates the human auditory system's response more closely than the linearly-spaced frequency bands used in the regular cepstrum. Figure 2 . The procedure of the timbre-based analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 557,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.1"
},
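As a concrete illustration of Eq. (2), the sketch below computes one frame's MFCCs with numpy: magnitude spectrum, triangular mel filterbank, log energies, then the cosine transform. The paper specifies 20 coefficients and an FFT size of 2048; the bank count of 26 and the mel-scale formula are common defaults assumed here, not taken from the paper.

```python
import numpy as np

def mfcc_frame(frame, sr, n_banks=26, n_coeffs=20, n_fft=2048):
    """Compute MFCCs for one frame, following the structure of Eq. (2)."""
    mag = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)

    def mel(f):    return 2595.0 * np.log10(1.0 + f / 700.0)
    def invmel(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # B+2 edge points, equally spaced on the mel scale up to Nyquist.
    edges = invmel(np.linspace(mel(0.0), mel(sr / 2.0), n_banks + 2))
    log_e = np.empty(n_banks)
    for b in range(n_banks):
        lo, ctr, hi = edges[b], edges[b + 1], edges[b + 2]
        up   = np.clip((freqs - lo) / (ctr - lo), 0.0, 1.0)  # rising slope
        down = np.clip((hi - freqs) / (hi - ctr), 0.0, 1.0)  # falling slope
        T_b  = np.minimum(up, down)                          # triangular T_b(j)
        log_e[b] = np.log(np.maximum(mag @ T_b, 1e-10))      # inner sum + log
    # Cosine transform of the log filterbank energies (outer sum in Eq. (2)).
    i = np.arange(1, n_coeffs + 1)[:, None]
    b = np.arange(1, n_banks + 1)[None, :]
    return (log_e * np.cos(np.pi * i * (b - 0.5) / n_banks)).sum(axis=1)
```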
{
"text": "i \u03bb 2 \u03bb N \u03bb 1 \u03bb ) \u03bb | Pr( 2 X ) \u03bb | Pr( 1 X ) \u03bb | Pr( N X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.1"
},
{
"text": "To capture the collective sound characteristics of each bird species, all of the MFCCs of each bird species are pooled together to form a Gaussian mixture model (GMM) (Reynolds & Rose, 1995) . It is assumed that each bird species has its own timbre pattern that reflects in the distribution of MFCCs over a span of time. A GMM approximates the static timbre patterns by a mixture of Gaussian densities. Note that the reason we capture the static timbre patterns rather than dynamic timbre patterns using hidden Markov models (HMMs) (Rabiner, 1989) is to prevent the resulting models from dependence on bird individuals or bird messages.",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "(Reynolds & Rose, 1995)",
"ref_id": "BIBREF8"
},
{
"start": 532,
"end": 547,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Mixture Modeling",
"sec_num": "3.2"
},
{
"text": "The parameters of a GMM consist of means, covariances, and mixture weights, which are commonly estimated using the Expectation-Maximization (EM) algorithm (Dempster et al., 1977) . Nevertheless, recognizing that the numbers of each bird species' sound samples for training may not be sufficient always, we use the GMM-MAP approach (Reynolds & Quatieri, 2000) to generate each bird species' GMM. Specifically, all of the MFCCs of all of the bird species first are pooled together to form a universal GMM using the EM algorithm. Then, the parameters of the universal GMM are modified with respect to each bird species using the MFCCs of the individual bird species based on maximum a posteriori (MAP) estimation. If there are N bird species to be identified, we generate N GMMs, 1 2 \u03bb ,\u03bb , ,\u03bb N \uf04c .",
"cite_spans": [
{
"start": 155,
"end": 178,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF2"
},
{
"start": 331,
"end": 358,
"text": "(Reynolds & Quatieri, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Mixture Modeling",
"sec_num": "3.2"
},
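A sketch of the GMM-MAP step, assuming the usual mean-only adaptation with a relevance factor in the style of Reynolds & Quatieri (2000), and scikit-learn for the universal GMM. The paper does not spell out these details, so the relevance factor r = 16 and diagonal covariances are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_species_gmms(all_mfccs, per_species_mfccs, n_mix=64, r=16.0):
    """Fit a universal GMM on the pooled MFCCs, then MAP-adapt its means
    to each species (mean-only adaptation with relevance factor r)."""
    ubm = GaussianMixture(n_components=n_mix, covariance_type="diag",
                          random_state=0).fit(all_mfccs)
    species_models = {}
    for name, X in per_species_mfccs.items():
        post = ubm.predict_proba(X)                  # responsibilities (T, K)
        n_k = post.sum(axis=0)                       # soft count per mixture
        ex_k = post.T @ X / np.maximum(n_k, 1e-10)[:, None]  # E[x | k]
        w = (n_k / (n_k + r))[:, None]               # data-vs-prior weight
        adapted = GaussianMixture(n_components=n_mix, covariance_type="diag")
        adapted.weights_ = ubm.weights_              # weights kept from UBM
        adapted.covariances_ = ubm.covariances_      # covariances kept too
        adapted.means_ = w * ex_k + (1.0 - w) * ubm.means_  # MAP mean update
        adapted.precisions_cholesky_ = 1.0 / np.sqrt(ubm.covariances_)
        species_models[name] = adapted
    return ubm, species_models
```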
{
"text": "Given an unknown bird sound recording, the system computes its MFCCs X = {X 1 , X 2 ,..., X T } before computing the likelihood probability Pr(X|\uf06c j ) for each model \uf06c j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Computation",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf028 \uf029 \uf028 \uf029 1 , , , , 1 1 , 1 Pr( \u03bb ) e x p T K j j k t j k j k t j k N t k j k w \uf070 \uf02d \uf03d \uf03d \uf0ec \uf0fc \uf0a2 \uf03d \uf0d7 \uf02d \uf02d \uf02d \uf0ed \uf0fd \uf0ee \uf0fe \uf0d5 \uf0e5 X | X C X C \uf06d \uf06d ,",
"eq_num": "(3)"
}
],
"section": "Likelihood Computation",
"sec_num": "3.3"
},
{
"text": "where K is the number of mixture Gaussian components; w j,k , \uf06d j,k , and C j,k are the k-th mixture weight, mean, and covariance of model \uf06c j , respectively; and prime (\uf0a2) denotes the vector transpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Computation",
"sec_num": "3.3"
},
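Eq. (3) is a product of per-frame mixture densities; to avoid numerical underflow over long recordings it is usually evaluated as a sum of log densities. A brief sketch, reusing the scikit-learn models from the previous snippet (the variable names are hypothetical):

```python
import numpy as np

def log_likelihood(gmm, X):
    """log Pr(X | lambda_j) under Eq. (3), summed over frames in the
    log domain (score_samples returns per-frame log densities)."""
    return float(np.sum(gmm.score_samples(X)))

# Hypothetical usage with the species_models dict from the sketch above:
# v = {name: log_likelihood(m, X_test) for name, m in species_models.items()}
```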
{
"text": "As bird sound is often regarded as a type of music, it is reasonable to assume that each bird species has its own pitch pattern that can be exploited to distinguish from other species. Pitch is the reciprocal of fundamental frequency; hence, a bird sound recording can be viewed as a sequence of fundamental frequencies. We then can model the variations of the fundamental frequencies to characterize each bird species' sounds. Nevertheless, considering that the estimation of fundamental frequency is prone to numerical errors, we use MIDI note numbers instead of fundamental frequencies to explore the pitch information in bird sounds. The MIDI note numbers can be treated as the non-linear quantization of fundamental frequencies and can absorb the numerical errors during the estimation of fundamental frequencies. Figure 3 shows the procedure of pitch-based analysis. It consists of MIDI note extraction for converting sound recordings from waveform representations into MIDI note sequences and bigram modeling for characterizing the underlying pitch information in the note sequences. Figure 3 . The procedure of pitch-based analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 819,
"end": 827,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1091,
"end": 1099,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pitch-based Analysis",
"sec_num": "4."
},
{
"text": ") | Pr( 2 \uf04c O ) | Pr( 1 \uf04c O ) | Pr( N \uf04c O i \uf04c 1 \uf04c 2 \uf04c N \uf04c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch-based Analysis",
"sec_num": "4."
},
{
"text": "Let e m , 1\uf0a3 m \uf0a3 M, be the inventory of possible notes produced by a bird. Our aim is to determine which among the M possible notes is most likely produced at each instant in a bird sound recording. We apply the strategy in (Yu et al., 2008) to solve this problem. First, the bird sound is divided into frames using a P-length sliding Hamming window, with 0.5P-length overlapping between frames. Every frame then undergoes a Fast Fourier Transform (FFT) with size J. Let x t,j denote the signal's energy with respect to FFT index j in frame t, where 1 \uf0a3 j\uf0a3 J, and x t,j has been normalized to the range between 0 and 1. Then, the signal's energy on the m-th note in frame t can be estimated by:",
"cite_spans": [
{
"start": 224,
"end": 241,
"text": "(Yu et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", , , ( ) max m t m t j j U j e x x \uf022 \uf03d \uf03d ,",
"eq_num": "(4)"
}
],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf028 \uf029 ( ) 2 440 ( ) 12 log 69.5 F j U j \uf0ea \uf0fa \uf03d \uf0d7 \uf02b \uf0ea \uf0fa \uf0eb \uf0fb ,",
"eq_num": "(5)"
}
],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "where \uf0eb \uf0fb is a floor operator, F(j) is the corresponding frequency of FFT index j, and U(\uf0d7) represents a conversion between the FFT indices and the MIDI note numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
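A sketch of Eqs. (4) and (5): mapping FFT bins to MIDI note numbers and max-pooling each note's bin energies. The frame spectrum x_t is assumed to be the normalized energy spectrum described in the text; the function name and defaults are illustrative.

```python
import numpy as np

def note_energies(x_t, sr, n_fft=2048, m_lo=60, m_hi=120):
    """Per-note energies for one frame: Eq. (5) converts bin j to note
    U(j); Eq. (4) takes the max bin energy within each note."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)[1:]   # skip the DC bin
    U = np.floor(12.0 * np.log2(freqs / 440.0) + 69.5).astype(int)  # Eq. (5)
    xhat = np.zeros(m_hi - m_lo + 1)
    for m in range(m_lo, m_hi + 1):
        bins = np.where(U == m)[0] + 1               # FFT indices with U(j)=m
        if bins.size:
            xhat[m - m_lo] = x_t[bins].max()         # Eq. (4)
    return xhat
```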
{
"text": "Ideally, if note n m is sung in frame t, the resulting energy, ,t m x , should be the maximum",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "among ,1 ,2 ,\u02c6, , , t t t M x x x \uf04b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": ". Nevertheless, it is sometimes the case that the energy of a true note is smaller than that of its harmonic note. To avoid the interference of harmonics in the estimation of true notes, we use the strategy of Sub-Harmonic Summation (SHS) (Piszczalski & Galler, 1979) , which computes a value for the \"strength\" of each possible note by summing the signal's energy on a note and its harmonic note numbers. Specifically, the strength of note n m in frame t is computed using",
"cite_spans": [
{
"start": 239,
"end": 267,
"text": "(Piszczalski & Galler, 1979)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", , 1 2 0\u0108 c t m t m c c y h x \uf02b \uf03d \uf03d \uf0e5 ,",
"eq_num": "(6)"
}
],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "where C is the number of harmonics considered, and h is a positive value less than 1 that discounts the contribution of higher harmonics. The result of this summation is that the true note usually receives the largest amount of energy from its harmonic notes. Thus, the true note in frame t can be determined by choosing the note number associated with the largest value of the strength. Nevertheless, recognizing that a note usually lasts several frames, the decision could be made by including the information from neighboring frames. Specifically, we determine the sung note in frame t by choosing the note number associated with the largest value of the strength accumulated for adjacent frames, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", 1 arg max W t t b m m M b W o y \uf02b \uf0a3 \uf0a3 \uf03d\uf02d \uf03d \uf0e5 ,",
"eq_num": "(7)"
}
],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
{
"text": "Further, the resulting note sequence is refined by taking into account the continuity between frames. This is done with median filtering, which replaces each note with the local median of notes of its neighboring \uf0b1W frames to remove jitters between adjacent frames. In the implementation, the range of e m is set to be 60 \uf0a3 e m \uf0a3 120, corresponding to fundamental frequency from to 261.6 to 8591 Hz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIDI Note Extraction",
"sec_num": "4.1"
},
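The sketch below strings together Eq. (6) (sub-harmonic summation), Eq. (7) (strengths accumulated over ±W neighboring frames), and the median-filter refinement. The values C = 4, h = 0.8, and W = 2 are assumed hyperparameters; the paper does not report them.

```python
import numpy as np
from scipy.signal import medfilt

def extract_note_sequence(Xhat, C=4, h=0.8, W=2):
    """Note decision from per-frame note energies Xhat, shape (T, M)."""
    M = Xhat.shape[1]
    Y = np.zeros_like(Xhat)
    for c in range(C + 1):                 # harmonic c sits 12c semitones up
        shift = 12 * c
        if shift < M:
            Y[:, :M - shift] += (h ** c) * Xhat[:, shift:]   # Eq. (6)
    acc = np.zeros_like(Y)
    for b in range(-W, W + 1):             # accumulate over +/- W frames
        acc += np.roll(Y, -b, axis=0)      # np.roll wraps at the ends, a
                                           # sketch simplification at edges
    notes = np.argmax(acc, axis=1)         # Eq. (7)
    return medfilt(notes, kernel_size=2 * W + 1)   # median-filter refinement
```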
{
"text": "After converting bird sounds into sequences of MIDI notes, we use a bigram model (Huang et al., 2001) to capture the dynamic information in the note sequences. The bigram model consists of a set of bigram probabilities and unigram probabilities. The bigram probabilities Pr(e j |e i ), 1 \uf0a3 i, j \uf0a3 M, account for the frequency of a certain note e i followed by another note e j , while the unigram probabilities Pr(e i ) account for the frequency of occurring a certain note e i . It is assumed that each bird species has its own pitch pattern that reflects in the frequency of occurrence of one or a pair of notes. For N bird species to be identified, we generate N bigram models 1 2 , , , N \uf04c \uf04c \uf04c \uf04b .",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Huang et al., 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Modeling",
"sec_num": "4.2"
},
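A minimal sketch of bigram-model training from note sequences. The note range 60–120 gives M = 61 symbols after offsetting; add-one (Laplace) smoothing is an assumption, since the paper does not say how unseen note pairs are handled.

```python
import numpy as np

def train_bigram(note_sequences, M=61, smooth=1.0):
    """Estimate unigram Pr(e_i) and bigram Pr(e_j|e_i) from note sequences
    whose symbols are already offset to 0..M-1 (MIDI 60..120 -> 0..60)."""
    uni = np.full(M, smooth)               # Laplace-smoothed counts
    bi = np.full((M, M), smooth)
    for seq in note_sequences:
        for t, n in enumerate(seq):
            uni[n] += 1
            if t > 0:
                bi[seq[t - 1], n] += 1     # count transition e_{t-1} -> e_t
    return uni / uni.sum(), bi / bi.sum(axis=1, keepdims=True)
```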
{
"text": "In the testing phase, an unknown bird sound recording first is converted into a sequence of notes O = o 1 , o 2 ,\u2026, o T , then tested against each bigram model ,1 i i N \uf04c \uf0a3 \uf0a3 . The results of testing are likelihood probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Computation",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 2 Pr( | ) Pr( ) Pr( | ) T t t t o o o \uf02d \uf03d \uf04c \uf03d \uf0d7 \uf0d5 O .",
"eq_num": "(8)"
}
],
"section": "Likelihood Computation",
"sec_num": "4.3"
},
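Eq. (8) scores a test note sequence against a species' bigram model; as with Eq. (3), it is computed in the log domain in practice. A sketch, assuming uni and bi come from the training sketch above:

```python
import numpy as np

def log_bigram_likelihood(seq, uni, bi):
    """log Pr(O | Lambda_i) per Eq. (8): unigram term for the first note,
    bigram terms for every subsequent transition."""
    ll = np.log(uni[seq[0]])
    for t in range(1, len(seq)):
        ll += np.log(bi[seq[t - 1], seq[t]])
    return float(ll)

# Hypothetical usage: r_i scores for the fusion rule in Eq. (1)
# r = {name: log_bigram_likelihood(o, *models[name]) for name in models}
```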
{
"text": "5. Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Computation",
"sec_num": "4.3"
},
{
"text": "The bird sound data used in this study stem from the commercial CDs and websites listed in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bird Sound Data",
"sec_num": "5.1"
},
{
"text": "Our experiments were conducted to examine the timbre-based component and pitch-based component separately before evaluating if the performance of bird sound identification could be further improved by combining the two components. The performance was characterized with the accuracy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2"
},
{
"text": "Tonal number of correctly -identified recordings Accuracy (in%)= 100% Tonal number of testing recordings \uf0b4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2"
},
{
"text": "In the timbre-based analysis, the MFCC feature vectors, each consisting of 20 coefficients, were extracted from the bird sound data, using a 30-ms Hamming-windowed frame with 15-ms frame shifts. The FFT size was set to be 2048. Table 3 shows the identification accuracies obtained with various numbers of mixture Gaussian densities used in GMM. The best accuracy in Table 3 is 71.1%, achieved with 64 mixtures. Table 4 shows the confusion matrix of the identification for the case of 64 mixtures. We can see from Table 4 that the timbre-based analysis performs best in identifying Pomatorhinus ruficollis, whereas it performs worst in identifying Dendrocitta formosae. ",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 3",
"ref_id": null
},
{
"start": 366,
"end": 373,
"text": "Table 3",
"ref_id": null
},
{
"start": 411,
"end": 418,
"text": "Table 4",
"ref_id": null
},
{
"start": 513,
"end": 520,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracies Obtained with the Timbre-based Analysis",
"sec_num": "5.2.1"
},
{
"text": "We then tested the pitch-based analysis component. The length of frame and FFT size were the same as the settings in computing MFCCs. Table 5 shows the resulting confusion matrix of the identification. We obtained an average identification accuracy of 72.0%, which is slightly higher than that obtained with the timbre-based analysis. Comparing Tables 4 and 5, we can see that the misidentified cases for timbre-based analysis and pitch-based analysis are different. This indicates that combined use of the two components would achieve higher identification accuracy than the use of an individual component.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 5",
"ref_id": "TABREF2"
},
{
"start": 335,
"end": 360,
"text": "Comparing Tables 4 and 5,",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Accuracies Obtained with the Pitch-based Analysis",
"sec_num": "5.2.2"
},
{
"text": "Finally, we examined the proposed system based on the combination of timbre-based analysis and pitch-based analysis. Table 6 shows the identification accuracies obtained with different settings in the value of \u03b1 and \u03b2. We can see from Table 6 that the combined use of timbre-based analysis and pitch-based analysis does perform better than both timbre-based analysis and pitch-based analysis used solely. It also can be seen that the resulting accuracies are not sensitive to the values of \u03b1 and \u03b2, as long as they are set to a certain range. Table 7 shows the confusion matrix of the identification for the case of \u03b1 = 0.4 and \u03b2 = 0.6, which achieves an average accuracy of 75.0%. We can see from Table 7 that the overall system improves the accuracies of identifying almost every bird species, compared to Tables 4 and 5 . This result confirms the validity of the proposed system. ",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 6",
"ref_id": null
},
{
"start": 235,
"end": 242,
"text": "Table 6",
"ref_id": null
},
{
"start": 543,
"end": 550,
"text": "Table 7",
"ref_id": "TABREF3"
},
{
"start": 698,
"end": 705,
"text": "Table 7",
"ref_id": "TABREF3"
},
{
"start": 808,
"end": 822,
"text": "Tables 4 and 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Combined Use of the Timbre-based and Pitch-based Analyses",
"sec_num": "5.2.3"
},
{
"text": "- - - - - - - 0.8 - 74.0 - - - - - - - 0.7 - - 74.1 - - - - - - 0.6 - - - 74.6 - - - - - 0.5 - - - - 74.9 - - - - 0.4 - - - - - 75 - - - 0.3 - - - - - - 74.8 - - 0.2 - - - - - - - 74.7 - 0.1 - - - - - - - - 73.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Use of the Timbre-based and Pitch-based Analyses",
"sec_num": "5.2.3"
},
{
"text": "This work has developed an automatic bird sound identification system, with the motivation of helping people learn to identify bird species from their sounds. The system is built on speech recognition techniques, along with specific tailoring to handle the bird sound characteristics. Two acoustic cues were investigated for analysis, timbre and pitch. In the timbre-based analysis, we used MFCCs to characterize the bird sound. Then, GMMs were used to represent the MFCCs as a set of parameters. In the pitch-based analysis, we converted bird sounds from their waveform representations into a sequence of MIDI notes. Then, Bigram models were used to capture the dynamic change information of the notes. Our experiments, conducted using audio data of the ten most common bird species in the Taipei urban area, show that the timbre-based, pitch-based, and the combined system achieves 71.1%, 72.1%, and 75.0% accuracy of bird sound identification, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Despite the potential, the performance of the proposed bird sound identification system still leaves considerable room for improvement. In the future, we will try to include more characteristics of bird sounds, such as the concept of bird calls and bird songs, into our system design. In addition, we have to scale up our sound database to hundreds or thousands of bird",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "This work was supported in part by the National Science Council, Taiwan, under Grant No. NSC 99-2628-E-027-005. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": "species to validate the proposed identification system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Template-based automatic recognition of birdsong syllables from continuous recordings",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Dave",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Margoliash",
"suffix": ""
}
],
"year": 1996,
"venue": "J. Acoust. Soc. Amer",
"volume": "100",
"issue": "2",
"pages": "1209--1219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anderson, S. E., Dave, A. S., & Margoliash, D. (1996) .Template-based automatic recognition of birdsong syllables from continuous recordings. J. Acoust. Soc. Amer., 100(2), 1209-1219.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences",
"authors": [
{
"first": "S",
"middle": [
"B"
],
"last": "Davis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mermelstein",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE Trans. on Acoustic, Speech and Signal Processing",
"volume": "28",
"issue": "4",
"pages": "357--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davis, S. B., & Mermelstein, P. (1980). Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences. IEEE Trans. on Acoustic, Speech and Signal Processing., 28(4), 357-366.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "J. R. Statist. Soc",
"volume": "39",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc., 39, 1-38.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Spoken Language Processing",
"authors": [
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, X., Acero, A., & Hon, H. W. (2001). Spoken Language Processing, Prentice Hall.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automated recognition of bird song elements from continuous recordings using dynamic time warping and hidden Markov models: A comparative study",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kogan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Margoliash",
"suffix": ""
}
],
"year": 1998,
"venue": "J. Acoust. Soc. Amer",
"volume": "103",
"issue": "4",
"pages": "2187--2196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kogan, J., & Margoliash, D. (1998). Automated recognition of bird song elements from continuous recordings using dynamic time warping and hidden Markov models: A comparative study. J. Acoust. Soc. Amer., 103(4), 2187-2196.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Birdsong recognition using backpropagation and multivariate statistics",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Mcilraith",
"suffix": ""
},
{
"first": "H",
"middle": [
"C"
],
"last": "Card",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Trans. Signal Process",
"volume": "45",
"issue": "11",
"pages": "2740--2748",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McIlraith, A. L., & Card, H. C. (1997). Birdsong recognition using backpropagation and multivariate statistics. IEEE Trans. Signal Process., 45(11), 2740-2748.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Predicting musical pitch from component frequency ratios",
"authors": [
{
"first": "M",
"middle": [],
"last": "Piszczalski",
"suffix": ""
},
{
"first": "B",
"middle": [
"A"
],
"last": "Galler",
"suffix": ""
}
],
"year": 1979,
"venue": "Journal of the Acoustical Society of America",
"volume": "66",
"issue": "3",
"pages": "710--720",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piszczalski, M., & Galler, B. A. (1979). Predicting musical pitch from component frequency ratios. Journal of the Acoustical Society of America, 66(3), 710-720.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A tutorial on Hidden Markov Models and selected applications in speech recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "2",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, L. R. (1989). A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust text-independent speaker identification using Gaussian mixture speaker models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 1995,
"venue": "IEEE Trans. Speech Audio Process",
"volume": "3",
"issue": "1",
"pages": "72--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D., & Rose, R. (1995). Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Trans. Speech Audio Process., 3(1), 72-83.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Speaker Verification Using Adapted Gaussian ixture Models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Quatieri",
"suffix": ""
}
],
"year": 2000,
"venue": "Digital Signal Processing",
"volume": "10",
"issue": "",
"pages": "19--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D., & Quatieri, T. (2000). Speaker Verification Using Adapted Gaussian ixture Models. Digital Signal Processing, 10, 19-41.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Parametric representations of bird sounds for automatic species recognition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Somervuo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "H\u00e4rm\u00e4",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fagerlund",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Trans. Audio, Speech, Language Process",
"volume": "14",
"issue": "6",
"pages": "2252--2263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somervuo, P., H\u00e4rm\u00e4, A., & Fagerlund, S. (2006). Parametric representations of bird sounds for automatic species recognition. IEEE Trans. Audio, Speech, Language Process., 14(6), 2252-2263.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A query-by-Singing system for retrieving karaoke music",
"authors": [
{
"first": "H",
"middle": [
"M"
],
"last": "Yu",
"suffix": ""
},
{
"first": "W",
"middle": [
"H"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "H",
"middle": [
"M"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Trans. Multimedia",
"volume": "10",
"issue": "8",
"pages": "1626--1637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, H. M., Tsai, W. H., & Wang, H. M. (2008). A query-by-Singing system for retrieving karaoke music. IEEE Trans. Multimedia, 10(8), 1626-1637.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The proposed bird sound identification system.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"text": "To facilitate the experiments, all of the sound data were converted into PCM WAV with 22.05-kHz sampling rate and 16-bit quantization resolution. We chose ten bird species commonly seen in the Taipei urban area, including The data were divided into two subsets, training and testing. The amount of sound data with respect to each bird species is listed inTable 2.",
"type_str": "table",
"content": "<table><tr><td>Dicrurus aeneus, Dendrocopos</td></tr></table>",
"html": null
},
"TABREF1": {
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td># mixtures</td><td>4</td><td>8</td><td>16</td><td>32</td><td>64</td><td>128</td></tr><tr><td>Dicrurus aeneus</td><td>55.8</td><td>58.4</td><td>58.4</td><td>59.7</td><td>64.9</td><td>62.3</td></tr><tr><td>Dendrocopos canicapillus</td><td>59.8</td><td>59.8</td><td>60.8</td><td>62.7</td><td>62.7</td><td>62.7</td></tr><tr><td>Pomatorhinus ruficollis</td><td>80</td><td>81.9</td><td>83.2</td><td>83.2</td><td>82.6</td><td>81.9</td></tr><tr><td>Stachyris ruficeps</td><td>69.1</td><td>69.1</td><td>70.4</td><td>71.6</td><td>70.4</td><td>69.1</td></tr><tr><td>Megalaima oorti</td><td>72.1</td><td>73.1</td><td>74</td><td>74.4</td><td>74.9</td><td>74.9</td></tr><tr><td>Heterophasia auricularis</td><td>74.4</td><td>75.6</td><td>78</td><td>77.9</td><td>76.7</td><td>76.7</td></tr><tr><td>Hypsipetes madagascariensis</td><td>62.4</td><td>63.7</td><td>65</td><td>66.2</td><td>68.8</td><td>67.5</td></tr><tr><td>Myiophonus insularis</td><td>68</td><td>72</td><td>76</td><td>76</td><td>76</td><td>76</td></tr><tr><td>Otus spilocephalus</td><td>64</td><td>64</td><td>64</td><td>64</td><td>67.6</td><td>65.8</td></tr><tr><td>Dendrocitta formosae</td><td>50.7</td><td>52.1</td><td>56.2</td><td>57.5</td><td>56.2</td><td>54.8</td></tr><tr><td>Average Accuracy</td><td>67.1</td><td>68.2</td><td>69.5</td><td>70.3</td><td>71.1</td><td>70.3</td></tr></table>",
"html": null
},
"TABREF2": {
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>True</td><td/><td colspan=\"2\">Identified</td><td>Dicrurus aeneus</td><td>Dendrocopos</td><td>canicapillus</td><td>Pomatorhinus</td><td>ruficollis</td><td>Stachyris ruficeps</td><td>Megalaima oorti</td><td>Heterophasia</td><td>auricularis</td><td>Hypsipetes</td><td>madagascariensis</td><td>Myiophonus</td><td>insularis</td><td>Otus</td><td>spilocephalus</td><td>Dendrocitta</td><td>formosae</td></tr><tr><td colspan=\"3\">Dicrurus aeneus</td><td/><td>61</td><td colspan=\"2\">19.9</td><td colspan=\"2\">0</td><td>10.4</td><td>0</td><td colspan=\"2\">5.2</td><td colspan=\"2\">3.9</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td>0</td></tr><tr><td colspan=\"3\">Dendrocopos canicapillus</td><td/><td colspan=\"5\">2.9 71.6 12.7</td><td>0</td><td>2</td><td colspan=\"2\">0</td><td colspan=\"2\">3.9</td><td colspan=\"2\">0</td><td colspan=\"2\">2.9</td><td>3.9</td></tr><tr><td colspan=\"3\">Pomatorhinus ruficollis</td><td/><td>0</td><td colspan=\"4\">7.7 82.9</td><td>0</td><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td colspan=\"2\">7.7</td><td colspan=\"2\">0</td><td>1.9</td></tr><tr><td colspan=\"3\">Stachyris ruficeps</td><td/><td>1.2</td><td colspan=\"2\">0</td><td colspan=\"4\">7.4 75.3 2.5</td><td colspan=\"2\">1.2</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td>0</td></tr><tr><td colspan=\"3\">Megalaima oorti</td><td/><td>0</td><td colspan=\"2\">1.4</td><td colspan=\"2\">0</td><td colspan=\"2\">1.8 82.2</td><td colspan=\"2\">0</td><td colspan=\"2\">11.4</td><td colspan=\"2\">0</td><td colspan=\"2\">2.7</td><td>0.5</td></tr><tr><td colspan=\"3\">Heterophasia auricularis</td><td/><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">11.6</td><td>0</td><td>0</td><td colspan=\"4\">76.7 5.8</td><td colspan=\"2\">5.8</td><td colspan=\"2\">0</td><td>0</td></tr><tr><td colspan=\"3\">Hypsipetes madagascariensis</td><td/><td>0</td><td colspan=\"2\">6.4</td><td colspan=\"2\">0</td><td>2.5</td><td>9.6</td><td colspan=\"2\">0</td><td colspan=\"6\">63.1 2.5 15.9</td><td>0</td></tr><tr><td colspan=\"3\">Myiophonus insularis</td><td/><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">4</td><td>0</td><td>12</td><td colspan=\"2\">28</td><td colspan=\"2\">0</td><td colspan=\"2\">56</td><td colspan=\"2\">0</td><td>0</td></tr><tr><td colspan=\"3\">Otus spilocephalus</td><td/><td colspan=\"5\">7.2 21.6 5.4</td><td>0</td><td>0</td><td colspan=\"2\">0.1</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td colspan=\"2\">64.9</td><td>0</td></tr><tr><td colspan=\"3\">Dendrocitta formosae</td><td/><td>5.4</td><td colspan=\"2\">0</td><td colspan=\"2\">8.2</td><td>0</td><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">24.7</td><td colspan=\"2\">0</td><td colspan=\"3\">2.7 58.9</td></tr><tr><td>\u03b1</td><td>\u03b2</td><td>0.1</td><td>0.2</td><td/><td>0.3</td><td/><td colspan=\"2\">0.4</td><td>0.5</td><td>0.6</td><td/><td/><td>0.7</td><td/><td colspan=\"2\">0.8</td><td colspan=\"2\">0.9</td></tr><tr><td>0.9</td><td/><td>73.3</td><td>-</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null
},
"TABREF3": {
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>True</td><td>Identified</td><td>Dicrurus aeneus</td><td>Dendrocopos</td><td>canicapillus</td><td>Pomatorhinus</td><td>ruficollis</td><td>Stachyris ruficeps</td><td>Megalaima oorti</td><td>Heterophasia</td><td>auricularis</td><td>Hypsipetes</td><td>madagascariensis</td><td>Myiophonus</td><td>insularis</td><td>Otus</td><td>spilocephalus</td><td>Dendrocitta</td><td>formosae</td></tr><tr><td/><td>Dicrurus aeneus</td><td>67.5</td><td colspan=\"2\">13</td><td colspan=\"2\">0</td><td>9.1</td><td>0</td><td colspan=\"2\">5.2</td><td colspan=\"2\">2.6</td><td colspan=\"2\">0</td><td colspan=\"2\">2.6</td><td colspan=\"2\">0</td></tr><tr><td colspan=\"7\">Dendrocopos canicapillus 2.9 75.5 9.8</td><td>0</td><td>0.1</td><td colspan=\"2\">0</td><td colspan=\"2\">3.9</td><td colspan=\"2\">0</td><td colspan=\"2\">2.9</td><td colspan=\"2\">3.9</td></tr><tr><td colspan=\"2\">Pomatorhinus ruficollis</td><td>0</td><td colspan=\"4\">5.8 85.2</td><td>0</td><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td colspan=\"2\">5.8</td><td colspan=\"2\">0</td><td colspan=\"2\">3.2</td></tr><tr><td colspan=\"2\">Stachyris ruficeps</td><td>1.2</td><td colspan=\"2\">0</td><td colspan=\"4\">6.2 75.3 2.5</td><td colspan=\"2\">1.2</td><td colspan=\"2\">0</td><td colspan=\"2\">1.2</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td></tr><tr><td/><td>Megalaima oorti</td><td>0</td><td colspan=\"2\">1.4</td><td colspan=\"2\">0</td><td colspan=\"2\">1.8 83.1</td><td colspan=\"2\">0</td><td colspan=\"2\">9.1</td><td colspan=\"2\">0</td><td colspan=\"2\">3.2</td><td colspan=\"2\">1.4</td></tr><tr><td colspan=\"2\">Heterophasia auricularis</td><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">10.5</td><td>0</td><td>0</td><td colspan=\"4\">80.2 4.7</td><td colspan=\"2\">3.5</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td></tr><tr><td colspan=\"2\">Hypsipetes madagascariensis</td><td>0</td><td colspan=\"2\">6.4</td><td colspan=\"2\">0</td><td>1.3</td><td>9.6</td><td colspan=\"2\">0</td><td colspan=\"6\">65.6 1.3 15.9</td><td colspan=\"2\">0</td></tr><tr><td colspan=\"2\">Myiophonus insularis</td><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">4</td><td>0</td><td>4</td><td colspan=\"2\">12</td><td colspan=\"2\">0</td><td colspan=\"2\">80</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td></tr><tr><td colspan=\"2\">Otus spilocephalus</td><td colspan=\"5\">7.2 21.6 5.4</td><td>0</td><td>0</td><td colspan=\"2\">1.8</td><td colspan=\"2\">0</td><td colspan=\"2\">0</td><td colspan=\"2\">64</td><td colspan=\"2\">0</td></tr><tr><td colspan=\"2\">Dendrocitta formosae</td><td>4.1</td><td colspan=\"2\">0</td><td colspan=\"2\">5.5</td><td>0</td><td>0</td><td colspan=\"2\">0</td><td colspan=\"2\">21.9</td><td colspan=\"2\">0</td><td colspan=\"4\">2.7 65.8</td></tr></table>",
"html": null
}
}
}
}