pszemraj/led-base-book-summary
Tags: Summarization · Transformers · PyTorch · Safetensors · kmfoda/booksum · led · text2text-generation · summary · longformer · booksum · long-document · long-form · Eval Results · Inference Endpoints
Licenses: apache-2.0, bsd-3-clause
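Since the repo is tagged for the Transformers summarization pipeline, a minimal usage sketch follows. The decoding settings are illustrative assumptions, not the parameters recommended in the model card.

```python
# Minimal sketch: run pszemraj/led-base-book-summary through the
# transformers summarization pipeline. Decoding settings below are
# illustrative assumptions, not the card's tuned values.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/led-base-book-summary",
)

long_text = "..."  # replace with the long document to summarize; LED handles long inputs

result = summarizer(
    long_text,
    min_length=16,
    max_length=256,
    no_repeat_ngram_size=3,
)
print(result[0]["summary_text"])
```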
Files and versions
Branch: refs/pr/4 · 3 contributors · History: 31 commits
Latest commit b1bd44d (lewtun, HF staff, over 2 years ago): Add evaluation results on the 3.0.0 config of cnn_dailymail
| File | Size | Last commit message | Last updated |
|---|---|---|---|
| .gitattributes | 1.17 kB | update checkpoint to one further fine-tuned for 7 more epochs | over 2 years ago |
| .gitignore | 13 Bytes | update checkpoint to one further fine-tuned for 7 more epochs | over 2 years ago |
| README.md | 17.3 kB | Add evaluation results on the 3.0.0 config of cnn_dailymail | over 2 years ago |
| config.json | 1.32 kB | add new checkpoint with 10 more epochs | over 2 years ago |
| merges.txt | 456 kB | add tokenizer | almost 3 years ago |
| pytorch_model.bin (LFS, pickle) | 648 MB | add new checkpoint with 10 more epochs | over 2 years ago |
| special_tokens_map.json | 772 Bytes | add tokenizer | almost 3 years ago |
| tokenizer.json | 2.11 MB | update checkpoint to one further fine-tuned for 7 more epochs | over 2 years ago |
| tokenizer_config.json | 1.32 kB | add new checkpoint with 10 more epochs | over 2 years ago |
| trainer_state.json | 13.7 kB | update checkpoint to one further fine-tuned for 7 more epochs | over 2 years ago |
| training_args.bin (LFS, pickle) | 4.4 kB | add new checkpoint with 10 more epochs | over 2 years ago |
| vocab.json | 798 kB | add tokenizer | almost 3 years ago |

Detected pickle imports:
- pytorch_model.bin (3): collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.FloatStorage
- training_args.bin (8): transformers.trainer_utils.HubStrategy, transformers.trainer_utils.IntervalStrategy, torch.device, transformers.training_args_seq2seq.Seq2SeqTrainingArguments, transformers.deepspeed.HfTrainerDeepSpeedConfig, transformers.training_args.OptimizerNames, torch.float16, transformers.trainer_utils.SchedulerType
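The two `.bin` files above are pickled, which is why the Hub flags their imports. As a hedged sketch of the usual remedy for the model weights (not instructions from this repo), pytorch_model.bin can be re-saved in the safetensors format so loading it no longer executes pickle code; the output filename below is an assumption, not a file present in this branch.

```python
# Hedged sketch: re-save the pickled checkpoint as safetensors.
# Assumes pytorch_model.bin has been downloaded locally; model.safetensors
# is an assumed output name, not a file in this repo.
import torch
from safetensors.torch import save_file

# weights_only=True (recent PyTorch) restricts unpickling to plain tensors
state_dict = torch.load("pytorch_model.bin", map_location="cpu", weights_only=True)

# clone so tied weights do not share storage, which save_file rejects
state_dict = {name: tensor.clone().contiguous() for name, tensor in state_dict.items()}

save_file(state_dict, "model.safetensors")
```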