---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
language: ja
license: apache-2.0
datasets: reazon-research/reazonspeech
inference: false
tags:
  - data2vec
  - speech
---

# `rinna/japanese-data2vec-audio-base`

![rinna-icon](./rinna.png)

# Overview

This is a Japanese data2vec Audio Base model trained by [rinna Co., Ltd.](https://rinna.co.jp/)

* **Model summary**

  The model architecture is the same as that of the [original data2vec Audio Base model](https://huggingface.co/facebook/data2vec-audio-base): 12 transformer layers with 12 attention heads (see the configuration sketch after this list).
  The model was trained using code from the [official repository](https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec#data2vec); the detailed training configuration can be found in the same repository and in the [original paper](https://ai.meta.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/).


* **Training**

  The model was trained on approximately 19,000 hours of the following Japanese speech corpus, ReazonSpeech v1.
  - [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech)

* **Contributors**

  - [Yukiya Hono](https://huggingface.co/yky-h)
  - [Kentaro Mitsui](https://huggingface.co/Kentaro321)
  - [Kei Sawada](https://huggingface.co/keisawada)
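
As a quick sanity check of the architecture described in the model summary, the layer count, head count, and hidden size can be read directly from the loaded configuration (a minimal sketch; the attribute names are the standard ones exposed by `transformers` configs):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("rinna/japanese-data2vec-audio-base")

# Base-size data2vec Audio: 12 transformer layers, 12 attention heads,
# and 768-dimensional hidden states.
print(config.num_hidden_layers)    # 12
print(config.num_attention_heads)  # 12
print(config.hidden_size)          # 768
```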
    
---

# How to use the model

```python
import soundfile as sf
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_name = "rinna/japanese-data2vec-audio-base"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# Path to a 16 kHz mono audio file (replace with your own recording).
audio_file = "sample.wav"
raw_speech_16kHz, sr = sf.read(audio_file)
inputs = feature_extractor(
    raw_speech_16kHz,
    return_tensors="pt",
    sampling_rate=sr,
)
with torch.no_grad():
    outputs = model(**inputs)

print(f"Input:  {inputs.input_values.size()}")  # [1, #samples]
print(f"Output: {outputs.last_hidden_state.size()}")  # [1, #frames, 768]
```
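
The encoder produces frame-level features at roughly 50 frames per second (the wav2vec 2.0-style convolutional front end strides 320 samples at 16 kHz, i.e. one frame per 20 ms). For downstream use, one simple aggregation is mean pooling over frames to obtain a single utterance-level embedding. This is an illustrative sketch, not a prescribed recipe, and it reuses `model` and `inputs` from the example above:

```python
import torch

# Collapse the frame axis with mean pooling to get one vector per utterance.
# Downstream tasks often learn their own pooling or use the frame-level
# features directly; mean pooling is just a common, simple baseline.
with torch.no_grad():
    frame_features = model(**inputs).last_hidden_state  # [1, #frames, 768]
utterance_embedding = frame_features.mean(dim=1)        # [1, 768]
print(utterance_embedding.size())
```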

A fairseq checkpoint file is also available [here](https://huggingface.co/rinna/japanese-data2vec-audio-base/tree/main/fairseq).

---

# How to cite
```bibtex
@misc{rinna-japanese-data2vec-audio-base,
  title={rinna/japanese-data2vec-audio-base},
  author={Hono, Yukiya and Mitsui, Kentaro and Sawada, Kei},
  url={https://huggingface.co/rinna/japanese-data2vec-audio-base}
}
```

---

# Citations
```bibtex
@inproceedings{baevski2022data2vec,
  title={Data2vec: A general framework for self-supervised learning in speech, vision and language},
  author={Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
  booktitle={International Conference on Machine Learning},
  pages={1298--1312},
  year={2022},
  organization={PMLR},
  doi={10.48550/arXiv.2202.03555}
}
```

---

# License
[The Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0)