phlippseitz committed
Commit 869c2d2
1 Parent(s): 9360f08

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +61 -0
README.md ADDED
@@ -0,0 +1,61 @@
---
tags:
- onnx
- question-answering
- bert
- adapterhub:qa/squad2
- adapter-transformers
datasets:
- squad_v2
language:
- en
---

# ONNX export of Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased
## Conversion of [AdapterHub/bert-base-uncased-pf-squad_v2](https://huggingface.co/AdapterHub/bert-base-uncased-pf-squad_v2) for UKP SQuARE


## Usage
```python
import numpy as np
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

# Download the ONNX model and open an inference session on CPU
onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-squad_v2-onnx', filename='model.onnx')  # or 'model_quant.onnx' for the quantized model
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

# Tokenize the question/context pair and cast the inputs to int64,
# as expected by the exported ONNX graph
inputs = tokenizer(question, context, padding=True, truncation=True, return_tensors='np')
inputs_int64 = {key: np.array(inputs[key], dtype=np.int64) for key in inputs}
outputs = onnx_model.run(input_feed=inputs_int64, output_names=None)
```
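
The session returns a list of raw output arrays. A minimal sketch of decoding the predicted answer span, assuming the first two outputs are the start and end logits (the usual layout for extractive QA exports; confirm with `[o.name for o in onnx_model.get_outputs()]`):

```python
# Assumption: outputs[0] holds start logits and outputs[1] holds end logits,
# each of shape (batch_size, sequence_length).
start_logits, end_logits = outputs[0], outputs[1]

# Pick the most likely start/end positions for the single example in the batch
start_idx = int(np.argmax(start_logits[0]))
end_idx = int(np.argmax(end_logits[0]))

# Decode the token span back to text; for SQuAD 2.0, a span pointing at
# position 0 ([CLS]) conventionally signals that the question is unanswerable.
answer_ids = inputs['input_ids'][0][start_idx:end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```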

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).


## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and
      Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```