julien-c committed
Commit 72f1b1a
1 Parent(s): ffe7b89

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/google/bert2bert_L-24_wmt_en_de/README.md

Files changed (1)
  1. README.md +38 -0
README.md ADDED
@@ -0,0 +1,38 @@
---
language:
- en
- de
license: apache-2.0
datasets:
- wmt14
tags:
- translation
---

# bert2bert_L-24_wmt_en_de EncoderDecoder model

The model was introduced in [this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn, and first released in [this repository](https://tfhub.dev/google/bertseq2seq/bert24_en_de/1).

The model is an encoder-decoder initialized from the `bert-large` checkpoint for both the encoder and the decoder, and fine-tuned on English-to-German translation on the WMT dataset linked above.

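For reference, warm-starting an encoder-decoder from BERT checkpoints can be done with the `EncoderDecoderModel` class. The sketch below is illustrative only: it uses the generic `bert-large-uncased` checkpoint as a stand-in for the exact BERT-large variant used by the authors, and it covers the initialization step, not the WMT fine-tuning that produced the released checkpoint.

```python
# Illustrative sketch of warm-starting a BERT2BERT encoder-decoder.
# Assumption: "bert-large-uncased" stands in for the exact BERT-large
# checkpoint the authors used; this does not reproduce the fine-tuned model.
from transformers import BertTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-large-uncased", "bert-large-uncased"
)
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")

# Generation needs to know which tokens start and pad decoder inputs.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# From here the model would be fine-tuned on WMT14 English-German sentence pairs.
```
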
Disclaimer: The model card has been written by the Hugging Face team.

## How to use

You can use this model for translation, *e.g.*:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer with the special tokens this checkpoint expects.
tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_en_de", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_en_de")

sentence = "Would you like to grab a coffee with me this week?"

# Tokenize without the default special tokens and translate.
input_ids = tokenizer(sentence, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Möchten Sie diese Woche einen Kaffee mit mir schnappen?
```
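
If you want to translate several sentences at once, the sketch below pads the inputs and decodes with beam search; the example sentences and the beam size of 4 are illustrative choices, not documented settings of this model.

```python
# Sketch: batched translation with padding and beam search (illustrative settings).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_en_de", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_en_de")

sentences = [
    "Would you like to grab a coffee with me this week?",
    "The weather is supposed to be nice on Saturday.",
]

# Pad the batch to a common length and keep the attention mask for generation.
inputs = tokenizer(sentences, return_tensors="pt", add_special_tokens=False, padding=True)
output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, num_beams=4)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```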