<br>
</p>

Easy-Translate is a script for translating large text files on your machine using
the [M2M100 models](https://arxiv.org/pdf/2010.11125.pdf) from Facebook/Meta AI.
We also provide a [script](#evaluate-translations) for Easy-Evaluation of your translations 🥳

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model can translate directly between any of the 9,900 translation directions of its 100 languages.
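For reference, an M2M100 checkpoint can also be used directly through 🤗 Transformers. The minimal sketch below (not Easy-Translate's own script; `facebook/m2m100_418M` is the smallest released checkpoint) shows the key detail: the decoder must be forced to start with the target-language token.

```python
# Minimal sketch: many-to-many translation with M2M100 via 🤗 Transformers.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

def translate(sentences, src_lang, tgt_lang, model_name="facebook/m2m100_418M"):
    """Translate a list of sentences between any two of M2M100's 100 languages."""
    tokenizer = M2M100Tokenizer.from_pretrained(model_name)
    model = M2M100ForConditionalGeneration.from_pretrained(model_name)
    tokenizer.src_lang = src_lang  # e.g. "en"
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    # Force the first decoded token to be the target-language id, e.g. "es".
    generated = model.generate(
        **batch, forced_bos_token_id=tokenizer.get_lang_id(tgt_lang)
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

if __name__ == "__main__":
    print(translate(["Life is like a box of chocolates."], "en", "es"))
```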
Easy-Translate is built on top of 🤗 HuggingFace's
[Transformers](https://huggingface.co/docs/transformers/index) and
[Accelerate](https://huggingface.co/docs/accelerate/index) libraries.

We support:

* CPU / multi-CPU / GPU / multi-GPU / TPU acceleration
* BF16 / FP16 / FP32 precision
* Automatic batch size finder: forget CUDA OOM errors. Set an initial batch size; if it doesn't fit, we will automatically adjust it.
* Sharded Data Parallel to load huge models sharded across multiple GPUs (see: https://huggingface.co/docs/accelerate/fsdp)
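The automatic batch size finder amounts to an OOM-retry loop. The generic sketch below (an illustration, not Easy-Translate's actual implementation) halves the batch size until the batch fits in memory:

```python
def find_batch_size(run_batch, initial_batch_size):
    """Halve the batch size until run_batch(batch_size) succeeds.

    Generic OOM-retry sketch: in PyTorch, a CUDA out-of-memory error
    surfaces as a RuntimeError whose message contains "out of memory".
    """
    batch_size = initial_batch_size
    while batch_size >= 1:
        try:
            run_batch(batch_size)
            return batch_size
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():
                raise  # not an OOM: re-raise unchanged
            batch_size //= 2  # retry with half the batch
    raise RuntimeError("Could not fit even a single example in memory")
```

With this scheme the user only picks an optimistic initial size; the loop converges to the largest power-of-two fraction of it that fits.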