---
license: mit
tags:
- Keyphrase Generation
---
|
|
|
# Usage
|
|
|
```python
# Install the adapter package (notebook / shell command)
!pip install KeyBartAdapter

from transformers import AutoTokenizer
from models import KeyBartAdapter  # import path provided by the KeyBartAdapter package

# Load the adapter model (adapter hidden dim 32) at a specific revision,
# together with the KeyBART tokenizer it builds on
model = KeyBartAdapter.from_pretrained(
    'Adapting/KeyBartAdapter',
    revision='3aee5ecf1703b9955ab0cd1b23208cc54eb17fce',
    adapter_hid_dim=32,
)
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
```
|
Revisions of the initial models for other adapter hidden dimensions (a loading sketch follows this list):

- adapter hidden dim 512 init model: `e38c77df86e0e289e5846455e226f4e9af09ef8e`
- adapter hidden dim 256 init model: `c6f3b357d953dcb5943b6333a0f9f941b832477`
- adapter hidden dim 128 init model: `f88116fa1c995f07ccd5ad88862e0aa4f162b1ea`
- adapter hidden dim 64 init model: `f7e8c6323b8d5822667ddc066ffe19ac7b810f4a`
- adapter hidden dim 32 init model: `24ec15daef1670fb9849a56517a6886b69b652f6`
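
As a minimal sketch, assuming these hashes can be passed as the `revision` argument in the same way as the loading call above, the 64-dimensional init model would be loaded like this:

```python
# Same loading pattern as above, pointing at the hidden-dim-64 init revision
model_64 = KeyBartAdapter.from_pretrained(
    'Adapting/KeyBartAdapter',
    revision='f7e8c6323b8d5822667ddc066ffe19ac7b810f4a',  # hd 64 init model (hash from the list above)
    adapter_hid_dim=64,
)
```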
|
|
|
**1. Inference**
|
|
|
```python
from transformers import Text2TextGenerationPipeline

# Wrap the adapter model and KeyBART tokenizer in a text-to-text generation pipeline
pipe = Text2TextGenerationPipeline(model=model, tokenizer=tokenizer)

abstract = '''Non-referential face image quality assessment methods have gained popularity as a pre-filtering step on face recognition systems. In most of them, the quality score is usually designed with face matching in mind. However, a small amount of work has been done on measuring their impact and usefulness on Presentation Attack Detection (PAD). In this paper, we study the effect of quality assessment methods on filtering bona fide and attack samples, their impact on PAD systems, and how the performance of such systems is improved when training on a filtered (by quality) dataset. On a Vision Transformer PAD algorithm, a reduction of 20% of the training dataset by removing lower quality samples allowed us to improve the BPCER by 3% in a cross-dataset test.'''

# Generate keyphrases for the abstract
pipe(abstract)
```
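
The pipeline is the simplest route; for more control over decoding, a minimal sketch using the standard Hugging Face seq2seq interface (assuming `KeyBartAdapter` exposes the usual `generate` method of BART-style models; the `max_length` value is illustrative, not prescribed by this card):

```python
# Tokenize the abstract and generate keyphrases directly with the seq2seq model
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```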