Update README.md
README.md

---
library_name: transformers
datasets:
- kresnik/zeroth_korean
language:
- ko
metrics:
- cer
---

# Model Card for wav2vec2-base-korean

## Model Details

### Model Description

This model is a fine-tuned version of Facebook's wav2vec2-base, adapted for Korean speech recognition using the Zeroth-Korean dataset. It transcribes Korean speech into text, using the jamo characters of the Korean writing system as its output units.

- **Developed by:** Jeonghyeon Park, Jaeyoung Kim
- **Model type:** Speech-to-Text
- **Language(s) (NLP):** Korean
- **License:** Apache 2.0
- **Finetuned from model [optional]:** facebook/wav2vec2-base

### Model Sources

- **Repository:** [github.com/KkonJJ/wav2vec2-base-korean](https://github.com/KkonJJ/wav2vec2-base-korean)

## Uses

### Direct Use

The model can be used directly to transcribe Korean speech to text without additional fine-tuning. It is particularly useful for applications that require accurate Korean speech recognition, such as voice assistants, transcription services, and language learning tools.

### Downstream Use [optional]

The model can be integrated into larger systems that require speech recognition, such as automated customer service or voice-controlled applications.

### Out-of-Scope Use

This model is not suitable for recognizing languages other than Korean, or for tasks that require understanding beyond transcription of spoken Korean.

## Bias, Risks, and Limitations

### Recommendations

Users should be aware of the model's limitations, including potential biases in the training data that may reduce accuracy for certain dialects or speakers. It is recommended to evaluate the model's performance on a representative sample of the intended application domain.

## How to Get Started with the Model

To get started with the model, use the code below:

```python
# Install dependencies (notebook syntax; drop the leading "!" in a regular shell)
!pip install transformers[torch] accelerate -U
!pip install datasets torchaudio -U
!pip install jiwer jamo
!pip install tensorboard

import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torchaudio
from jamo import h2j, j2hcj

# Load the fine-tuned model and its processor from the Hub
model_name = "Kkonjeong/wav2vec2-base-korean"
model = Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2Processor.from_pretrained(model_name)

model.to("cuda")
model.eval()

def load_and_preprocess_audio(file_path):
    # Load the audio file and resample to the 16 kHz rate expected by wav2vec2
    speech_array, sampling_rate = torchaudio.load(file_path)
    if sampling_rate != 16000:
        resampler = torchaudio.transforms.Resample(sampling_rate, 16000)
        speech_array = resampler(speech_array)
    input_values = processor(speech_array.squeeze().numpy(), sampling_rate=16000).input_values[0]
    return input_values

def predict(file_path):
    # Greedy CTC decoding: take the most likely token per frame, then collapse repeats
    input_values = load_and_preprocess_audio(file_path)
    input_values = torch.tensor(input_values).unsqueeze(0).to("cuda")
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)[0]
    return transcription

audio_file_path = "your_audio_file.wav"
transcription = predict(audio_file_path)
print("Transcription:", transcription)
```

## Training Details

### Training Data

The model was trained on the Zeroth-Korean dataset, a collection of Korean speech data consisting of audio recordings and their corresponding transcriptions.

### Training Procedure

#### Preprocessing

Special characters were removed from the transcriptions, and the text was converted to jamo characters to better align with the phonetic structure of the Korean language.
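
For illustration, the snippet below is a minimal sketch of this kind of normalization using the `jamo` library from the getting-started dependencies; the exact cleaning rules applied during training may differ.

```python
import re
from jamo import h2j, j2hcj

def normalize_transcription(text: str) -> str:
    # Assumed cleaning rule: keep Hangul syllables, jamo, and spaces; drop everything else
    text = re.sub(r"[^가-힣ㄱ-ㅎㅏ-ㅣ\s]", "", text)
    # Decompose syllables into jamo (h2j), then map them to compatibility jamo (j2hcj)
    return j2hcj(h2j(text))

print(normalize_transcription("안녕하세요, 반갑습니다!"))
# prints something like: ㅇㅏㄴㄴㅕㅇㅎㅏㅅㅔㅇㅛ ㅂㅏㄴㄱㅏㅂㅅㅡㅂㄴㅣㄷㅏ
```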

#### Training Hyperparameters

- **Training regime:** Mixed precision (fp16)
- **Batch size:** 32
- **Learning rate:** 1e-4
- **Number of epochs:** 10
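
For orientation only, these reported values map onto a `transformers` `TrainingArguments` configuration roughly as sketched below; the output path and any settings not listed above (warmup, weight decay, evaluation schedule) are assumptions, not values reported by the authors.

```python
from transformers import TrainingArguments

# Hypothetical configuration mirroring only the reported hyperparameters
training_args = TrainingArguments(
    output_dir="./wav2vec2-base-korean",  # assumed output directory
    per_device_train_batch_size=32,       # batch size 32
    learning_rate=1e-4,                   # learning rate 1e-4
    num_train_epochs=10,                  # 10 epochs
    fp16=True,                            # mixed-precision (fp16) training
)
```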

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated using the test split of the Zeroth-Korean dataset.
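
As a hedged sketch, a test example can be pulled from the Hub as shown below; this assumes the `kresnik/zeroth_korean` dataset listed in the card metadata exposes a standard `test` split with `audio` and `text` columns.

```python
from datasets import load_dataset

# Assumption: default configuration with "train"/"test" splits and 16 kHz audio
test_set = load_dataset("kresnik/zeroth_korean", split="test")

sample = test_set[0]
print(sample["text"])                    # reference transcription
print(sample["audio"]["sampling_rate"])  # expected to be 16000
```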

#### Metrics

The primary evaluation metric was the Character Error Rate (CER): the percentage of characters in the predicted transcription that differ from the reference text.
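
As a concrete example, CER can be computed with the `jiwer` library installed in the getting-started snippet; `jiwer.cer` is part of its public API, though the exact text normalization behind the reported score is not documented here.

```python
import jiwer

reference = "안녕하세요"   # ground-truth transcription (5 characters)
hypothesis = "안녕하셔요"  # model output with one substituted character

cer = jiwer.cer(reference, hypothesis)
print(f"CER: {cer:.3f}")  # one substitution over five characters -> 0.200
```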

### Results

- **Final CER:** 0.073

#### Summary

The model achieved a CER of 7.3% on the Zeroth-Korean test set, indicating good transcription performance.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware Type:** NVIDIA A100
- **Hours used:** Approximately 8 hours

## Technical Specifications

### Model Architecture and Objective

The model is based on the wav2vec 2.0 architecture with a CTC (connectionist temporal classification) head, which converts audio input into jamo-level text output by modeling the phonetic structure of speech.
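
A quick way to confirm these details from the published checkpoint is to inspect its configuration; the sketch below uses standard `transformers` config fields, and the printed vocabulary size reflects the jamo-level tokenizer shipped with the model.

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("Kkonjeong/wav2vec2-base-korean")
processor = Wav2Vec2Processor.from_pretrained("Kkonjeong/wav2vec2-base-korean")

# Encoder dimensions of the base architecture and the size of the CTC output layer
print(model.config.num_hidden_layers)  # transformer encoder layers (12 for wav2vec2-base)
print(model.config.hidden_size)        # hidden dimension (768 for wav2vec2-base)
print(model.config.vocab_size)         # CTC output tokens (jamo characters + special tokens)
print(len(processor.tokenizer))        # tokenizer vocabulary size, matching vocab_size
```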

### Compute Infrastructure

#### Hardware

- **GPUs:** NVIDIA A100

#### Software

- **Framework:** PyTorch
- **Libraries:** Transformers, Datasets, Torchaudio, Jiwer, Jamo

## Citation

**BibTeX:**

```bibtex
@misc{park2024wav2vec2basekorean,
  author    = {Park, Jeonghyeon and Kim, Jaeyoung},
  title     = {wav2vec2-base-korean},
  year      = {2024},
  publisher = {Hugging Face},
  note      = {https://huggingface.co/Kkonjeong/wav2vec2-base-korean}
}
```

**APA:**

Park, J., & Kim, J. (2024). *wav2vec2-base-korean*. Hugging Face. https://huggingface.co/Kkonjeong/wav2vec2-base-korean

## Model Card Authors [optional]

Jeonghyeon Park, Jaeyoung Kim

## Model Card Contact

For more information, contact shshjhjh4455@gmail.com or kbs00717@gmail.com.