---
language: en
license: apache-2.0
---


# 🏬QAmden🏬: Question-Answering-based Multi-DocumENt model

The Hugging Face version of the QAmden model from *Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering* (ACL 2023).


You can load it with:

```python
from transformers import (
    AutoTokenizer,
    LEDConfig,
    LEDForConditionalGeneration,
)

# load the tokenizer, config, and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/QAmden')
config = LEDConfig.from_pretrained('biu-nlp/QAmden')
model = LEDForConditionalGeneration.from_pretrained('biu-nlp/QAmden')
```
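Since QAmden is loaded through the LED classes, inference typically also involves a `global_attention_mask` marking which tokens attend globally (commonly the first token). As a minimal sketch, assuming input IDs have already been produced by the tokenizer (the IDs below are illustrative placeholders, not real QAmden output):

```python
import torch

# Placeholder token IDs standing in for tokenizer output (illustrative only)
input_ids = torch.tensor([[0, 713, 16, 10, 3645, 2]])

# LED-style models accept a global_attention_mask of the same shape as
# input_ids, where 1 marks tokens with global attention. A common choice
# is to give global attention to the first token only.
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1
```

The mask is then passed alongside `input_ids` to `model.generate(...)`; see the original repo for the document-separation conventions the model was trained with.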

The original repo is [here](https://github.com/aviclu/peekacross).

If you find our work useful, please cite the paper as:

```bibtex
@article{caciularu2023peekacross,
  title={Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering},
  author={Caciularu, Avi and Peters, Matthew E and Goldberger, Jacob and Dagan, Ido and Cohan, Arman},
  journal={The 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023},
  year={2023}
}
```