Update README.md
README.md CHANGED

@@ -1,3 +1,35 @@
+---
+datasets:
+- squad_v2
+- tydiqa
+- mlqa
+- xquad
+- germanquad
+language:
+- en
+- hi
+- de
+- ar
+- bn
+- fi
+- ja
+- zh
+- id
+- sw
+- ta
+- gr
+- ru
+- es
+- th
+- tr
+- vi
+widget:
+- text: "Hugging Face has seen rapid growth in its popularity since the get-go. It is definitely doing the right things to attract more and more people to its platform, some of which are on the following lines: Community driven approach through large open source repositories along with paid services. Helps to build a network of like-minded people passionate about open source. Attractive price point. The subscription-based features, e.g.: Inference based API, starts at a price of $9/month."
+  example_title: "English"
+- text: "A un año y tres días de que el balón ruede en el Al Bayt Stadium inaugurando el Mundial 2022, ya se han dibujado los primeros bocetos de la próxima Copa del Mundo.13 selecciones están colocadas en el mapa con la etiqueta de clasificadas y tienen asegurado pisar los verdes de Qatar en la primera fase final otoñal. Serbia, Dinamarca, España, Países Bajos, Suiza, Croacia, Francia, Inglaterra, Bélgica, Alemania, Brasil, Argentina y Qatar, como anfitriona, entrarán en el sorteo del 1 de abril de 2022 en Doha en el que 32 países serán repartidos en sus respectivos grupos. "
+  example_title: "Spanish"
+
+---
 # Multi-lingual Question Generating Model (mt5-base)
 Give the model a passage and it will generate a question about the passage.
 
@@ -27,7 +59,7 @@ There is no guarantee that it will produce a question in the language of the passage.
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 
 tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-base-qgen")
-model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-base-qgen"
+model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-base-qgen")
 
 text = "Hugging Face has seen rapid growth in its \
 popularity since the get-go. It is definitely doing\
@@ -47,32 +79,4 @@ tokenizer.decode(output[0], skip_special_tokens=True)
 # What is Hugging Face's price point?
 ```
 
-#### Flax version
-```python
-from transformers import AutoTokenizer, FlaxAutoModelForSeq2SeqLM
-
-tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-base-qgen")
-model = FlaxAutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-base-qgen")
-
-text = "A un año y tres días de que el balón ruede \
-en el Al Bayt Stadium inaugurando el Mundial 2022, \
-ya se han dibujado los primeros bocetos de la próxima \
-Copa del Mundo.13 selecciones están colocadas en el \
-mapa con la etiqueta de clasificadas y tienen asegurado\
-pisar los verdes de Qatar en la primera fase final \
-otoñal. Serbia, Dinamarca, España, Países Bajos, \
-Suiza, Croacia, Francia, Inglaterra, Bélgica, Alemania,\
-Brasil, Argentina y Qatar, como anfitriona, entrarán en \
-el sorteo del 1 de abril de 2022 en Doha en el que 32 \
-países serán repartidos en sus respectivos grupos. \
-"
-
-inputs = tokenizer(text, return_tensors="pt")
-output = model.generate(**inputs, max_length=40)
-
-tokenizer.decode(output["sequences"][0], skip_special_tokens=True)
-# ¿Cuántos países entrarán en el sorteo del Mundial 2022?
-```
-
-
 Model trained on Cloud TPUs from Google's TPU Research Cloud (TRC)
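The diff shows only fragments of the updated PyTorch example (the tokenize/generate/decode middle, new lines 66 through 78, is elided). A complete end-to-end version might look like the sketch below, reassembled from the visible context lines and the parallel structure of the removed Flax block; the passage is the English widget text from the new front matter, and `max_length=40` is an assumption carried over from the removed block.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-base-qgen")
model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-base-qgen")

# Passage taken from the English widget example in the new front matter
text = (
    "Hugging Face has seen rapid growth in its popularity since the get-go. "
    "It is definitely doing the right things to attract more and more people to "
    "its platform, some of which are on the following lines: Community driven "
    "approach through large open source repositories along with paid services. "
    "Helps to build a network of like-minded people passionate about open source. "
    "Attractive price point. The subscription-based features, e.g.: Inference based "
    "API, starts at a price of $9/month."
)

inputs = tokenizer(text, return_tensors="pt")
# max_length=40 mirrors the setting in the removed Flax block (assumption)
output = model.generate(**inputs, max_length=40)

print(tokenizer.decode(output[0], skip_special_tokens=True))
# What is Hugging Face's price point?
```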
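The removed Flax section also contained a framework mix-up: it tokenized with `return_tensors="pt"`, producing PyTorch tensors that a Flax model cannot consume, and indexed the generate output with `output["sequences"]` where attribute access is the conventional form. If a Flax example is ever restored, a corrected sketch might look like the following; `from_pt=True` is an assumption for the case where the checkpoint ships only PyTorch weights.

```python
from transformers import AutoTokenizer, FlaxAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-base-qgen")
# from_pt=True converts PyTorch weights on the fly; drop it if Flax weights
# are published for this checkpoint (assumption either way)
model = FlaxAutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-base-qgen", from_pt=True)

# Passage taken from the Spanish widget example in the new front matter
text = (
    "A un año y tres días de que el balón ruede en el Al Bayt Stadium "
    "inaugurando el Mundial 2022, ya se han dibujado los primeros bocetos de la "
    "próxima Copa del Mundo. 13 selecciones están colocadas en el mapa con la "
    "etiqueta de clasificadas y tienen asegurado pisar los verdes de Qatar en la "
    "primera fase final otoñal. Serbia, Dinamarca, España, Países Bajos, Suiza, "
    "Croacia, Francia, Inglaterra, Bélgica, Alemania, Brasil, Argentina y Qatar, "
    "como anfitriona, entrarán en el sorteo del 1 de abril de 2022 en Doha en el "
    "que 32 países serán repartidos en sus respectivos grupos."
)

# A Flax model needs NumPy/JAX arrays, not PyTorch tensors ("pt")
inputs = tokenizer(text, return_tensors="np")
output = model.generate(**inputs, max_length=40)

# Flax generate() returns an output object; .sequences holds the token ids
print(tokenizer.decode(output.sequences[0], skip_special_tokens=True))
# ¿Cuántos países entrarán en el sorteo del Mundial 2022?
```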