Update README.md

pipeline_tag: summarization
---
# FRED-T5 1.7B Summarizer

The model was trained by [SberDevices](https://sberdevices.ru/).

The model is trained on a mix of open summarization data for Russian, [RussianNLP/Mixed-Summarization-Dataset](https://huggingface.co/datasets/RussianNLP/Mixed-Summarization-Dataset), and uses the prefix token '\<LM\>'.

## Usage

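A minimal usage sketch: the checkpoint name, the tokenizer and model classes (FRED-T5 models pair `GPT2Tokenizer` with `T5ForConditionalGeneration`), and the generation settings below are assumptions; only the final decode line is taken from the original snippet.

```python
# Minimal usage sketch; the checkpoint name and generation settings
# below are assumptions, not the card's exact snippet.
import torch
from transformers import GPT2Tokenizer, T5ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed checkpoint name for this summarizer.
tokenizer = GPT2Tokenizer.from_pretrained("RussianNLP/FRED-T5-Summarizer")
model = T5ForConditionalGeneration.from_pretrained("RussianNLP/FRED-T5-Summarizer").to(device)

# Inputs must start with the '<LM>' prefix token.
text = "<LM> Текст статьи, которую нужно суммаризовать."
inputs = tokenizer(text, return_tensors="pt").to(device)

# Illustrative generation settings.
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=200,
    num_beams=5,
    no_repeat_ngram_size=4,
)
print(tokenizer.decode(outputs[0][1:]))
```

Slicing `outputs[0][1:]` drops the leading pad token that T5-style decoders emit before the generated summary.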

# Cite us
```
@misc{akhmetgareeva2024summary,
      title={Towards Russian Summarization: can architecture solve data limitations problems?},
      author={Albina Akhmetgareeva and Ilia Kuleshov and Vlad Leschuk and Alexander Abramov and Alena Fenogenova},
      year={2024},
}
```