HackerMonica committed • Commit 9ee9d47 • Parent: 83ab0fc
write README.md referring to cifope/nllb-200-wo-fr-distilled-600M

README.md CHANGED
---
license: cc-by-nc-4.0
language:
- en
- zh
metrics:
- bleu
pipeline_tag: translation
---
# Model Documentation: English to Simplified Chinese Translation with NLLB-200-distilled-600M

## Model Overview

This document describes a machine translation model fine-tuned from Meta's NLLB-200-distilled-600M for translating from English to Simplified Chinese. The model, hosted at `HackerMonica/nllb-200-distilled-600M-en-zh_CN`, is based on the distilled 600M-parameter variant of NLLB-200 and has been fine-tuned specifically for the English–Simplified Chinese language pair.
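NLLB-200 models identify languages by FLORES-200 codes rather than ISO 639-1 codes: English is `eng_Latn` and Simplified Chinese is `zho_Hans`. A minimal sketch of that mapping for this model's language pair (the `to_flores` helper is illustrative, not part of the model's API):

```python
# FLORES-200 codes used by NLLB-200 for this model's language pair.
# Illustrative only: NLLB-200 itself covers 200+ languages.
FLORES_CODES = {
    "en": "eng_Latn",  # English, Latin script
    "zh": "zho_Hans",  # Chinese, Simplified (Han) script
}

def to_flores(iso_code):
    """Map an ISO 639-1 code to the FLORES-200 code NLLB expects."""
    return FLORES_CODES[iso_code]
```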

## Dependencies

The model requires the `transformers` library by Hugging Face. Ensure that you have the library installed:

```bash
pip install transformers
```

## Setup

Import the necessary classes from the `transformers` library:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
```

Initialize the model and tokenizer:

```python
model = AutoModelForSeq2SeqLM.from_pretrained('HackerMonica/nllb-200-distilled-600M-en-zh_CN')
# src_lang defaults to 'eng_Latn' for NLLB tokenizers; set it explicitly for clarity
tokenizer = AutoTokenizer.from_pretrained('HackerMonica/nllb-200-distilled-600M-en-zh_CN', src_lang='eng_Latn')
```
39 |
+
|
40 |
+
## Usage
|
41 |
+
|
42 |
+
To use the model for translating text, use python code below to translate text from English to Simplified Chinese:
|
43 |
+
|
44 |
+
```python
|
45 |
+
def translate(text):
|
46 |
+
inputs = tokenizer(text, return_tensors="pt").to("cuda")
|
47 |
+
translated_tokens = model.generate(
|
48 |
+
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["zho_Hans"], max_length=300
|
49 |
+
)
|
50 |
+
translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
|
51 |
+
return translation
|
52 |
+
```
|
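The `generate` call above caps output at `max_length=300` tokens, so very long inputs will be truncated. One workaround is to split a document into smaller pieces and translate each piece separately. A minimal sketch, assuming sentence-final punctuation marks reasonable split points (the `split_into_chunks` helper is an illustration, not part of this model's API):

```python
import re

def split_into_chunks(text, max_chars=500):
    """Split text on sentence-final punctuation, then pack sentences
    into chunks of at most max_chars characters each."""
    # The lookbehind keeps the punctuation attached to each sentence.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be passed to translate() individually, e.g.:
# translated = "".join(translate(chunk) for chunk in split_into_chunks(doc))
```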