EduardoPacheco committed • Commit 02e56fa
1 Parent(s): fe858e7
Add necessary implementation
README.md
CHANGED
@@ -5,7 +5,7 @@ datasets:
 tags:
 - evaluate
 - metric
-description: "
+description: "Word Error Rate (WER) metric with detailed error analysis capabilities for speech recognition evaluation"
 sdk: gradio
 sdk_version: 3.19.1
 app_file: app.py
@@ -14,37 +14,79 @@ pinned: false
 
 # Metric Card for ArgWER
 
-***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*
-
 ## Metric Description
-
+ArgWER is an enhanced version of the Word Error Rate (WER) metric used for evaluating speech recognition systems. While it calculates the standard WER score, it also provides detailed information about the different types of errors (insertions, deletions, and substitutions) when requested. This makes it particularly useful for detailed analysis of speech recognition system performance.
 
 ## How to Use
-
+The metric can be loaded and used through the `evaluate` library:
 
+```python
+import evaluate
+
+wer = evaluate.load("EduardoPacheco/argwer")
+predictions = ["this is the prediction", "there is an other sample"]
+references = ["this is the reference", "there is another one"]
+wer_score = wer.compute(predictions=predictions, references=references)
+```
 
 ### Inputs
-*List
-- **
+- **predictions** *(List[str])*: List of transcriptions to score from the speech recognition system.
+- **references** *(List[str])*: List of reference transcriptions for each speech input.
+- **detailed** *(bool, optional)*: Whether to return a detailed error analysis. Defaults to False.
 
 ### Output Values
-
-
-
+The metric returns either a float value representing the WER score or, when `detailed=True`, a dictionary containing:
+- `wer`: Overall word error rate
+- `substitution_rate`: Rate of word substitutions
+- `deletion_rate`: Rate of word deletions
+- `insertion_rate`: Rate of word insertions
+- `num_substitutions`: Absolute number of substitutions
+- `num_deletions`: Absolute number of deletions
+- `num_insertions`: Absolute number of insertions
+- `num_hits`: Number of correct words
+
+The WER score ranges from 0 to infinity, where:
+- 0 represents a perfect transcription
+- Lower scores are better
+- Scores above 1 are possible due to insertions
 
 #### Values from Popular Papers
-
+Word Error Rate is a standard metric in speech recognition. For example:
+- Modern speech recognition systems typically achieve WER scores between 0.02 (2%) and 0.15 (15%) on clean speech.
+- The exact values vary significantly with factors such as audio quality, accent, and background noise.
 
 ### Examples
-
+Basic usage:
+```python
+predictions = ["this is the prediction", "there is an other sample"]
+references = ["this is the reference", "there is another one"]
+wer = evaluate.load("EduardoPacheco/argwer")
+
+# Basic WER score
+wer_score = wer.compute(predictions=predictions, references=references)
+# Returns: 0.5
+
+# Detailed analysis
+detailed_scores = wer.compute(predictions=predictions, references=references, detailed=True)
+# Returns a dictionary with the detailed error analysis
+```
 
 ## Limitations and Bias
-
+- The metric treats all words equally, regardless of their importance in the sentence
+- It doesn't account for semantic similarity (e.g., synonyms are counted as errors)
+- The metric is sensitive to word order, which might not always reflect the actual quality of the transcription
+- Punctuation and capitalization can affect the scores if not properly normalized
 
 ## Citation
-
+```bibtex
+@inproceedings{inproceedings,
+    author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
+    year = {2004},
+    month = {01},
+    pages = {},
+    title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
+}
+```
 
 ## Further References
-
+- [Word Error Rate on Wikipedia](https://en.wikipedia.org/wiki/Word_error_rate)
+- [JiWER Library](https://github.com/jitsi/jiwer/) - the underlying implementation used by this metric
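For illustration, here is a minimal sketch of what `detailed=True` should return for the example pair used throughout the card, assuming jiwer aligns `"there is an other sample"` against `"there is another one"` as two substitutions plus one insertion (the exact split between error types depends on jiwer's word-level alignment):

```python
import evaluate

wer = evaluate.load("EduardoPacheco/argwer")

predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]

scores = wer.compute(predictions=predictions, references=references, detailed=True)
# Expected, given S=3, D=0, I=1 and C=5 hits over N = S + D + C = 8 reference words:
# {'wer': 0.5, 'substitution_rate': 0.375, 'deletion_rate': 0.0,
#  'insertion_rate': 0.125, 'num_substitutions': 3, 'num_deletions': 0,
#  'num_insertions': 1, 'num_hits': 5}
```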
argwer.py
CHANGED
@@ -11,85 +11,117 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""
+"""This is the same as WER, but it also returns detailed information about the errors (insertions, deletions, substitutions)"""
 
 import evaluate
 import datasets
+from jiwer import compute_measures
 
 
-# TODO: Add BibTeX citation
 _CITATION = """\
-@
-
-
-
+@inproceedings{inproceedings,
+    author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
+    year = {2004},
+    month = {01},
+    pages = {},
+    title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
 }
 """
 
-# TODO: Add description of the module here
 _DESCRIPTION = """\
-
-
-"""
+Word error rate (WER) is a common metric of the performance of an automatic speech recognition system.
+
+The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.
+
+This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between perplexity and word error rate.
+
+Word error rate can then be computed as:
 
+WER = (S + D + I) / N = (S + D + I) / (S + D + C)
+
+where
+
+S is the number of substitutions,
+D is the number of deletions,
+I is the number of insertions,
+C is the number of correct words,
+N is the number of words in the reference (N = S + D + C).
+
+This value indicates the average number of errors per reference word. The lower the value, the better the
+performance of the ASR system, with a WER of 0 being a perfect score.
+"""
 
-# TODO: Add description of the arguments of the module here
 _KWARGS_DESCRIPTION = """
-
+Compute WER score of transcribed segments against references.
+
 Args:
-
-
-
-
+    references: List of references for each speech input.
+    predictions: List of transcriptions to score.
+    detailed (bool, default=False): Whether to also return normalized substitutions, deletions and insertions.
+
 Returns:
-
-
+    (float): the word error rate
+
 Examples:
-    Examples should be written in doctest format, and should illustrate how
-    to use the function.
 
-    >>>
-    >>>
-    >>>
-
+    >>> predictions = ["this is the prediction", "there is an other sample"]
+    >>> references = ["this is the reference", "there is another one"]
+    >>> wer = evaluate.load("EduardoPacheco/argwer")
+    >>> wer_score = wer.compute(predictions=predictions, references=references)
+    >>> print(wer_score)
+    0.5
 """
 
-# TODO: Define external resources urls if needed
-BAD_WORDS_URL = "http://url/to/external/resource/bad_words.txt"
-
-
 @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
 class ArgWER(evaluate.Metric):
     """TODO: Short description of my evaluation module."""
 
     def _info(self):
-        # TODO: Specifies the evaluate.EvaluationModuleInfo object
         return evaluate.MetricInfo(
-            # This is the description that will appear on the modules page.
-            module_type="metric",
             description=_DESCRIPTION,
             citation=_CITATION,
             inputs_description=_KWARGS_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "predictions": datasets.Value("string", id="sequence"),
+                    "references": datasets.Value("string", id="sequence"),
+                }
+            ),
+            codebase_urls=["https://github.com/jitsi/jiwer/"],
+            reference_urls=[
+                "https://en.wikipedia.org/wiki/Word_error_rate",
+            ],
         )
 
     def _download_and_prepare(self, dl_manager):
         """Optional: download external resources useful to compute the scores"""
-        # TODO: Download external resources if needed
         pass
 
-    def _compute(self, predictions, references):
+    def _compute(self, predictions, references, detailed=False) -> float | dict[str, float]:
         """Returns the scores"""
-
-
-
-
-
+        num_substitutions = 0
+        num_deletions = 0
+        num_insertions = 0
+        num_hits = 0
+        for prediction, reference in zip(predictions, references):
+            measures = compute_measures(reference, prediction)
+            num_substitutions += measures["substitutions"]
+            num_deletions += measures["deletions"]
+            num_insertions += measures["insertions"]
+            num_hits += measures["hits"]
+
+        total = num_substitutions + num_deletions + num_hits
+        incorrect = num_substitutions + num_deletions + num_insertions
+
+        if detailed:
+            return dict(
+                wer=incorrect / total,
+                substitution_rate=num_substitutions / total,
+                deletion_rate=num_deletions / total,
+                insertion_rate=num_insertions / total,
+                num_substitutions=num_substitutions,
+                num_deletions=num_deletions,
+                num_insertions=num_insertions,
+                num_hits=num_hits,
+            )
+        return incorrect / total