haritzpuerto committed
Commit 0297073
1 Parent(s): 127b15f

Create README.md

Files changed (1)
  1. README.md +56 -0
README.md ADDED
@@ -0,0 +1,56 @@
---
language:
- en
tags:
- question-answering
license: "apache-2.0"
datasets:
- squad
- newsqa
- hotpotqa
- searchqa
- triviaqa-web
- naturalquestions
- qamr
- duorc
- boolq
- commonsense_qa
- hellaswag
- race
- social_i_qa
- drop
- narrativeqa
- hybrid_qa
metrics:
- squad
- accuracy
---

# Description
Checkpoint of MetaQA from the paper *MetaQA: Combining Expert Agents for Multi-Skill Question Answering* (https://arxiv.org/abs/2112.01922). MetaQA takes the answer predictions of multiple expert QA agents and selects the best one among them.

# How to Use

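The example below imports `MetaQA` and `PredictionRequest` from an `inference` module. As a minimal setup sketch, assuming the model repository ships an `inference.py` alongside the checkpoint (the filename is an assumption, not confirmed by this card), the file can be fetched with `huggingface_hub` and put on the import path first:

```python
# Sketch: download the inference module assumed to live in the model repository.
# "inference.py" is an assumed filename; adjust it to whatever the repo actually contains.
import os
import sys

from huggingface_hub import hf_hub_download

inference_path = hf_hub_download(repo_id="haritzpuerto/MetaQA", filename="inference.py")

# Make the downloaded module importable so `from inference import ...` below resolves.
sys.path.append(os.path.dirname(inference_path))
```
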
```python
from inference import MetaQA, PredictionRequest

metaqa = MetaQA("haritzpuerto/MetaQA")

# Run the QA agents on the input question and context.
# For this example, mock-up outputs from extractive QA agents are shown.
list_preds = [
    ('Utah', 0.1442876160144806),
    ('DOC] [TLE] 1886', 0.10822545737028122),
    ('Utah Territory', 0.6455602645874023),
    ('Eli Murray opposed the', 0.352359801530838),
    ('Utah', 0.48052430152893066),
    ('Utah Territory', 0.35186105966567993),
    ('Utah', 0.8328599333763123),
    ('Utah', 0.3405868709087372),
]

# Pad the list of predictions with ("", 0.0) until it has 16 entries, because MetaQA was
# trained on 16 datasets/agents covering other formats as well, not only extractive QA.
for _ in range(16 - len(list_preds)):
    list_preds.append(("", 0.0))

request = PredictionRequest()
request.input_question = "While serving as Governor of this territory, 1880-1886, Eli Murray opposed the advancement of polygamy?"
request.input_predictions = list_preds

(pred, agent_name, metaqa_score, agent_score) = metaqa.run_metaqa(request)
```
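
Judging by the variable names, the returned tuple holds the selected answer (`pred`), the agent that produced it (`agent_name`), MetaQA's confidence in that selection (`metaqa_score`), and the original agent's confidence (`agent_score`); this interpretation is inferred from the example, not stated by the card. A quick way to inspect the result:

```python
# Print the selection; field meanings are inferred from the variable names above.
print(f"answer: {pred!r} (from agent {agent_name})")
print(f"metaqa_score={metaqa_score:.4f}, agent_score={agent_score:.4f}")
```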