domenicrosati committed
Commit e964f38 · Parents: 2430523, a7ed627

Merge branch 'main' of https://huggingface.co/domenicrosati/question_converter-3b

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -16,7 +16,7 @@ See:
 
 ## TL;DR:
 We fine-tune a seq2seq model,
-T5-3B (Raffel et al., 2020), using the $(a, q, d)$ pairs
+T5-3B (Raffel et al., 2020), using the (a, q, d) pairs
 annotated by Demszky et al. (2018).
 
 Where a is answer, q is question, and d is declerative sentence (i.e. a statement).
@@ -25,6 +25,8 @@ See Appendex B.2 of Chen et al. for more.
 
 ## Usage
 
+The prompt should be `{question} {seperator} {answer}` where the seperator is `</s>`.
+
 ```python
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 
@@ -32,10 +34,11 @@ tokenizer = AutoTokenizer.from_pretrained('domenicrosati/question_converter-3b')
 model = AutoModelForSeq2SeqLM.from_pretrained('domenicrosati/question_converter-3b')
 
 question = "Where in the world is Carmen Sandiego?"
-answer = "She is in D'Abruzzo"
+answer = "She is in Abruzzo"
 
 prompt = f'{question} </s> {answer}'
 input_ids = tokenizer(prompt, return_tensors='pt').input_ids
 output_ids = model.generate(input_ids)
 responses = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
 ```
+> `['Carmen Sandiego is in Abruzzo.']`
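
For reference, the merged Usage section assembled into a single runnable sketch. The final `print` call and the comments are added here for illustration only; the model name, prompt format, and expected output all come from the README shown in the diff above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the question-converter checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained('domenicrosati/question_converter-3b')
model = AutoModelForSeq2SeqLM.from_pretrained('domenicrosati/question_converter-3b')

# Prompt format per the README: "{question} </s> {answer}"
question = "Where in the world is Carmen Sandiego?"
answer = "She is in Abruzzo"
prompt = f'{question} </s> {answer}'

# Generate the declarative sentence and decode it
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids)
responses = tokenizer.batch_decode(output_ids, skip_special_tokens=True)

print(responses)  # expected per the README: ['Carmen Sandiego is in Abruzzo.']
```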