|
--- |
|
language: en |
|
tags: |
|
- AMRBART |
|
license: mit |
|
--- |
|
|
|
## AMRBART-large-finetuned-AMR2.0-AMR2Text |
|
|
|
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR2.0 dataset. It achieves a Sacre-BLEU score of 45.7 on the evaluation set. More details are given in the paper: [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
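
For reference, a BLEU score over generated sentences can be computed with the [sacrebleu](https://github.com/mjpost/sacrebleu) package. The snippet below is only a minimal sketch; the exact tokenization and evaluation settings used in the paper may differ.

```python
# Minimal sketch: scoring generated sentences with sacrebleu.
# The exact evaluation settings from the paper may differ.
import sacrebleu

hypotheses = ["The boy wants to go."]    # model outputs, one string per example
references = [["The boy wants to go."]]  # one reference stream, aligned with the hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")
```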
|
|
|
## Model description |
|
Same as [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large); see that model card for details.
|
|
|
## Training data |
|
|
|
The model is fine-tuned on [AMR2.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 36,521 training instances, 1,368 validation instances, and 1,371 test instances.
|
|
|
## Intended uses & limitations |
|
|
|
You can use the model for AMR-to-text generation, but it is mostly intended for the news domain, which its training data covers.
|
|
|
## How to use |
|
Here is how to initialize this model in PyTorch: |
|
|
|
```python
from transformers import BartForConditionalGeneration

# load the fine-tuned AMR-to-text checkpoint from the Hugging Face Hub
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text")
```
|
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing. |
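
As an illustration only, a generation call might look like the sketch below. It assumes that the Hub checkpoint ships compatible tokenizer files loadable via `AutoTokenizer` and that the input AMR graph has already been linearized; for faithful results, follow the tokenizer initialization and preprocessing pipeline in the repository linked above.

```python
# Illustrative sketch, not the repository's official pipeline.
# Assumes the checkpoint provides tokenizer files and the AMR graph is already linearized.
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Example linearized AMR graph for "The boy wants to go."
linearized_amr = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 ( boy ) ) )"

inputs = tokenizer(linearized_amr, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```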
|
|
|
|
|
## BibTeX entry and citation info |
|
Please cite the following paper if you find this model helpful:
|
|
|
```bibtex |
|
@inproceedings{bai-etal-2022-graph, |
|
title = "Graph Pre-training for {AMR} Parsing and Generation", |
|
author = "Bai, Xuefeng and |
|
Chen, Yulong and |
|
Zhang, Yue", |
|
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", |
|
month = may, |
|
year = "2022", |
|
address = "Online", |
|
publisher = "Association for Computational Linguistics", |
|
url = "todo", |
|
doi = "todo", |
|
pages = "todo" |
|
} |
|
``` |