---
tags:
- question-answering
- bert
- adapterhub:qa/squad1
- adapter-transformers
datasets:
- squad
language:
- en
---

# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.

This adapter was created for use with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: `adapter-transformers` is a fork of `transformers` that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
```
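With the adapter and its prediction head active, the model can be queried like any extractive QA model. The snippet below is a minimal sketch: the question/context strings are only illustrative, and it assumes the loaded head exposes standard `start_logits`/`end_logits`, as question-answering models in `transformers` do.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name

question = "What is the capital of France?"  # illustrative example
context = "Paris is the capital and most populous city of France."

# Encode question and context as a single sequence pair for BERT.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the highest-scoring start and end positions and decode the span between them.
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```

Note that this greedy span decoding is only a sketch; a robust setup would also handle cases where the predicted end precedes the start, for example by scoring candidate spans jointly.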

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and
      Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```