---
title: README
emoji: 🌍
colorFrom: green
colorTo: gray
sdk: static
pinned: false
---

# stacked summaries

This organization exists to test and evaluate the (_potential_) benefits of "task-oriented pretraining" as popularized by the [FLAN-T5](https://huggingface.co/google/flan-t5-base) series of models.

## mission statement

The idea is to apply a similar concept, but adjusted to be more specific to the summarization task. Hopefully, this trains models that actually "know" how to condense and distill meaningful information from text, rather than learning a naive style transfer of _"this is what the dataset's summaries sound like, so I will do that with the essential words."_

The most apparent augmentation/task is "stacking": concatenating several independent (document, summary) pairs whose summaries are still shorter than `MAX_LENGTH_TOKENS` when combined, so the model has to learn to separate and group the summaries for these independent concepts.
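
A minimal sketch of what this augmentation might look like; the greedy grouping, the separator string, and the specific budget value are illustrative assumptions, not the organization's actual pipeline:

```python
from transformers import AutoTokenizer

MAX_LENGTH_TOKENS = 1024  # assumed budget for the combined summaries
SEP = "\n\n"  # hypothetical separator between independent concepts

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")


def stack_examples(pairs, max_length=MAX_LENGTH_TOKENS):
    """Greedily group (document, summary) pairs so the combined
    summaries stay under the token budget; each group becomes one
    training example containing several independent concepts."""
    groups, current, current_len = [], [], 0
    for doc, summary in pairs:
        n_tokens = len(tokenizer(summary).input_ids)
        # start a new group if adding this summary would exceed the budget
        if current and current_len + n_tokens > max_length:
            groups.append(current)
            current, current_len = [], 0
        current.append((doc, summary))
        current_len += n_tokens
    if current:
        groups.append(current)
    # join each group's documents and summaries into one stacked example
    return [
        (SEP.join(d for d, _ in g), SEP.join(s for _, s in g))
        for g in groups
    ]
```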