## Model description
This model is a fine-tuned version of facebook/bart-large, trained on the sunhaozhepy/ag_news_keywords_embeddings dataset from the Hugging Face Hub to extract the main keywords from a text. It achieves the following results on the evaluation set:
- Loss: 0.6179
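The dataset can be pulled from the Hub for inspection. Below is a minimal sketch, assuming only the dataset name given above; its splits and column names are whatever the dataset defines, so the sketch prints them rather than hard-coding a schema.

```python
# A minimal sketch for inspecting the training data; prints the splits,
# columns, and row counts instead of assuming a schema.
from datasets import load_dataset

ds = load_dataset("sunhaozhepy/ag_news_keywords_embeddings")
print(ds)
```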
## Intended use
Use the `summarization` pipeline to generate keywords for a text:

```python
from transformers import pipeline

# load the fine-tuned checkpoint from the Hub
pipe = pipeline('summarization', model='ilsilfverskiold/bart-keyword-extractor')
print(pipe("Aria Opera GPT version - All the browsers come with their own version of AI. So I gave it a try and ask it with LLM it was using. First if all it didn't understand the question. Then I explained and asked which version. I got the usual answer about a language model that is not aware of it's own model I find that curious, but also not transparent. My laptop, software all state their versions and critical information. But something that can easily fool a lot of people doesn't. What I also wonder if the general public will be stuck to ChatGPT 3.5 for ever while better models are behind expensive paywalls."))
```
## Training hyperparameters
The following hyperparameters were used during training (a code sketch reproducing them follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
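Here is a sketch reproducing these values with `Seq2SeqTrainingArguments`; the output directory is a hypothetical path and the trainer/dataset wiring is omitted, so only the listed hyperparameters are taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Only the values listed above come from this card; output_dir is hypothetical.
# The default optimizer already matches betas=(0.9, 0.999) and epsilon=1e-08.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-keyword-extractor",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,       # 8 * 16 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
```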
## Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7701        | 0.57  | 500  | 0.7390          |
| 0.5804        | 1.14  | 1000 | 0.7056          |
| 0.5395        | 1.71  | 1500 | 0.6811          |
| 0.4036        | 2.28  | 2000 | 0.6504          |
| 0.3763        | 2.85  | 2500 | 0.6179          |
## Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0