---
language: en
tags:
- bart
- question
- generation
- seq2seq
datasets:
- eqg-race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . "
---
# voidful/bart-eqg-question-generator

## Model description

This model is a sequence-to-sequence question generator: it takes only a context as input and generates a question as output.
It is based on a pretrained `bart-base` model and fine-tuned on the [EQG-RACE](https://github.com/jemmryx/EQG-RACE) corpus.

## Intended uses & limitations

The model is trained to generate examination-style multiple-choice questions.

#### How to use


The model takes a context as its input sequence and generates a question as its output sequence. The maximum sequence length is 1024 tokens. Inputs should be organised in the following format:
```
context
```
The input sequence can then be encoded and passed as the `input_ids` argument to the model's `generate()` method.
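As a concrete illustration, the steps above can be sketched with the `transformers` library. This is a minimal sketch: it assumes the model is loadable from the Hub under this repository's id, and the generation parameters (`num_beams`, `max_length`) are illustrative choices, not values prescribed by this card:

```python
# Sketch: load the model from the Hub and generate a question from a context.
# Assumes network access to download "voidful/bart-eqg-question-generator";
# beam-search settings below are illustrative, not official.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "voidful/bart-eqg-question-generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = (
    "When you ' re having a holiday , one of the main questions to ask is "
    "which hotel or apartment to choose . However , when it comes to France , "
    "you have another special choice : treehouses ."
)

# Encode the context, truncating to the model's 1024-token limit.
input_ids = tokenizer(
    context, return_tensors="pt", truncation=True, max_length=1024
).input_ids

# Generate a question with beam search and decode it back to text.
output_ids = model.generate(input_ids, num_beams=4, max_length=64)
question = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(question)
```

The same pattern works for any context: longer passages are simply truncated to 1024 tokens by the tokenizer before generation.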