
🏬QAmden🏬: Question-Answering-based Multi-DocumENt model

HF version of the QAmden model, introduced in Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering (ACL 2023).

You can load it as follows:

```python
from transformers import (
    AutoTokenizer,
    LEDConfig,
    LEDForConditionalGeneration,
)

# load the tokenizer, config, and model
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/QAmden')
config = LEDConfig.from_pretrained('biu-nlp/QAmden')
model = LEDForConditionalGeneration.from_pretrained('biu-nlp/QAmden')
```
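Since QAmden is an LED-based conditional generation model, one way to run it is the standard Hugging Face generation loop. The sketch below is an assumption, not the authors' exact recipe: the document-joining format, the global-attention placement, and the generation settings are illustrative choices (consult the original repo for the intended input format).

```python
# Hedged sketch: generating from QAmden over multiple input documents.
# Assumptions (not from the original repo): documents are joined with
# whitespace, and global attention is placed on the first token.
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained('biu-nlp/QAmden')
model = LEDForConditionalGeneration.from_pretrained('biu-nlp/QAmden')

documents = [
    "The first source document goes here.",
    "The second source document goes here.",
]
text = " ".join(documents)  # assumed joining format

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

# LED takes a global_attention_mask; attending globally from the first
# token is a common default for LED-style models.
global_attention_mask = inputs["input_ids"].new_zeros(inputs["input_ids"].shape)
global_attention_mask[:, 0] = 1

output_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=128,
    num_beams=4,
)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
```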

The original repo is here.

If you find our work useful, please cite the paper as:

```bibtex
@article{caciularu2023peekacross,
  title={Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering},
  author={Caciularu, Avi and Peters, Matthew E and Goldberger, Jacob and Dagan, Ido and Cohan, Arman},
  journal={The 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023},
  year={2023}
}
```
Model size: 460M params (F32, Safetensors)