---
language:
- en
tags:
- update_summarization
datasets:
-
-
metrics:
- edit distance
- ROUGE
- BERTScore
---

# Update Summarization with BART Large and Longformer Encoder Decoder

## Model description

This is a Transformer-based model that supports generative sequence-to-sequence summarization of long documents.

Based on [BART Large](https://huggingface.co/transformers/model_doc/bart.html) with the [Longformer Encoder Decoder](https://huggingface.co/transformers/model_doc/led.html) to allow for longer inputs.

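To confirm the extended input window, the checkpoint's LED config exposes separate encoder and decoder position limits (a quick check, assuming the standard LED config fields):

```python
from transformers import AutoConfig

# LED configs carry distinct limits for the encoder (input) and decoder (output)
config = AutoConfig.from_pretrained("hyesunyun/update-summarization-bart-large-longformer")
print(config.max_encoder_position_embeddings)  # maximum input length in tokens
print(config.max_decoder_position_embeddings)  # maximum output length in tokens
```
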
## Intended uses & limitations

#### How to use

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("hyesunyun/update-summarization-bart-large-longformer")
model = LEDForConditionalGeneration.from_pretrained("hyesunyun/update-summarization-bart-large-longformer")

# two evidence documents, each opened with <EV> and carrying a <t> title and <abs> abstract
input_text = "<EV> <t> Hypoglycemic effect of bitter melon compared with metformin in newly diagnosed type 2 diabetes patients. <abs> ETHNOPHARMACOLOGICAL RELEVANCE: Bitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use. AIM OF STUDY: This study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin. MATERIALS AND METHODS: This is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks. RESULTS: There was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 mumol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 mumol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 mumol/L, respectively). CONCLUSIONS: Bitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day. <EV> <t> Momordica charantia for type 2 diabetes mellitus. <abs> There is insufficient evidence to recommend momordica charantia for type 2 diabetes mellitus. Further studies are therefore required to address the issues of standardization and the quality control of preparations. For medical nutritional therapy, further observational trials evaluating the effects of momordica charantia are needed before RCTs are established to guide any recommendations in clinical practice."
inputs_dict = tokenizer(input_text, padding="max_length", max_length=10240, return_tensors="pt", truncation=True)
input_ids = inputs_dict.input_ids
attention_mask = inputs_dict.attention_mask
global_attention_mask = torch.zeros_like(attention_mask)
# put global attention on the <s> token
global_attention_mask[:, 0] = 1

predicted_summary_ids = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
print(tokenizer.batch_decode(predicted_summary_ids, skip_special_tokens=True))
```
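The call to `generate` above uses the default decoding settings. In practice you would typically constrain the output length and search strategy; a minimal sketch with hypothetical values:

```python
# hypothetical decoding parameters, not settings documented by this card
predicted_summary_ids = model.generate(
    input_ids,
    attention_mask=attention_mask,
    global_attention_mask=global_attention_mask,
    max_length=512,       # cap the generated summary length
    num_beams=4,          # beam search instead of greedy decoding
    early_stopping=True,  # stop once all beams are finished
)
```
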
#### Limitations and bias

Provide examples of latent issues and potential remediations.

## Training data

Used the pre-trained [LED model](https://huggingface.co/transformers/model_doc/led.html) and fine-tuned it with the dataset found in [this GitHub repo](https://github.com/hyesunyun/update_summarization_data).

## Training procedure

Preprocessing, hardware used, hyperparameters...

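The procedure is not documented here yet; the following is a generic sketch of how an LED checkpoint is typically fine-tuned with the `transformers` Trainer API. Every value below (base checkpoint, dataset column names, hyperparameters) is a hypothetical placeholder, not the authors' recorded setup.

```python
# hypothetical fine-tuning sketch; NOT the documented training procedure
from transformers import (
    LEDForConditionalGeneration,
    LEDTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = LEDTokenizer.from_pretrained("allenai/led-large-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384")

def preprocess(batch):
    # "source"/"target" are hypothetical column names for the update-summarization pairs
    model_inputs = tokenizer(batch["source"], max_length=10240, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=512, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

args = Seq2SeqTrainingArguments(
    output_dir="update-summarization-led",
    per_device_train_batch_size=1,   # long inputs are memory-heavy
    gradient_accumulation_steps=8,   # hypothetical effective batch size of 8
    learning_rate=3e-5,              # hypothetical
    num_train_epochs=3,              # hypothetical
)

# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=train_dataset.map(preprocess, batched=True))
# trainer.train()
```
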
## Eval results

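No scores are reported here yet. The metadata above names edit distance, ROUGE, and BERTScore; a minimal sketch of computing them with the `evaluate` library, using hypothetical prediction/reference strings:

```python
# requires: pip install evaluate rouge_score bert_score
import evaluate

predictions = ["bitter melon modestly reduced fructosamine at 2,000 mg/day"]      # hypothetical output
references = ["there is insufficient evidence to recommend momordica charantia"]  # hypothetical target

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))

bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=predictions, references=references, lang="en"))

# character-level Levenshtein edit distance, implemented inline
def edit_distance(a: str, b: str) -> int:
    dp = list(range(len(b) + 1))  # distances against the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution or match
    return dp[-1]

print(edit_distance(predictions[0], references[0]))
```
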
### BibTeX entry and citation info

```bibtex
@inproceedings{...,
  year={2021}
}
```