# Generate Title using Keywords
Title Generator is an online tool that helps you create great titles for your content. By entering specific keywords or a short description of your content, you receive title suggestions that increase its appeal.
Developed by https://exnrt.com
- Fine-tuned: T5-Base
- Parameters: 223M
- Train Dataset Length: 10,000
- Validation Dataset Length: 2,000
- Batch Size: 1
- Epochs: 2
- Train Loss: 1.6578
- Validation Loss: 1.8115
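As a rough illustration only (this is not the author's actual training script), the listed batch size and epoch count could map onto the Hugging Face `Seq2SeqTrainer` as sketched below; the one-example in-memory dataset is purely hypothetical.

```python
# Illustrative sketch only -- not the author's training code. It shows how the
# hyperparameters listed above (T5-Base, batch size 1, 2 epochs) map onto the
# Hugging Face Trainer API. The one-example dataset below is hypothetical.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical keyword -> title pair; the real run used 10,000 training
# and 2,000 validation examples.
raw = Dataset.from_dict({
    "keywords": ["model, Fine-tuning, Machine Learning"],
    "title": ["The Art of Fine-Tuning Your Machine Learning Model"],
})

def preprocess(batch):
    enc = tokenizer(batch["keywords"], truncation=True, max_length=24)
    enc["labels"] = tokenizer(batch["title"], truncation=True, max_length=24)["input_ids"]
    return enc

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="keywords-title-generator",
    per_device_train_batch_size=1,  # Batch Size: 1
    num_train_epochs=2,             # Epochs: 2
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```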
You can also use the t5-small version (77M parameters), available in the mini folder.
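If you prefer the lighter variant, here is a minimal loading sketch; it assumes the t5-small checkpoint files sit in a subfolder named mini of the same repository.

```python
# Sketch for loading the smaller variant; assumes the t5-small files are
# stored in a subfolder called "mini" inside the same repository.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

mini_tokenizer = AutoTokenizer.from_pretrained(
    "Ateeqq/keywords-title-generator", subfolder="mini", token="your_token"
)
mini_model = AutoModelForSeq2SeqLM.from_pretrained(
    "Ateeqq/keywords-title-generator", subfolder="mini", token="your_token"
)
```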
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda"  # use "cpu" if no GPU is available

tokenizer = AutoTokenizer.from_pretrained("Ateeqq/keywords-title-generator", token='your_token')
model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/keywords-title-generator", token='your_token').to(device)

def generate_title(keywords):
    # Tokenize the comma-separated keywords and move them to the model's device.
    input_ids = tokenizer(keywords, return_tensors="pt", padding="longest", truncation=True, max_length=24).input_ids.to(device)
    # Diverse beam search: 5 beams split into 5 groups, returning 5 distinct candidate titles.
    outputs = model.generate(
        input_ids,
        num_beams=5,
        num_beam_groups=5,
        num_return_sequences=5,
        repetition_penalty=10.0,
        diversity_penalty=3.0,
        no_repeat_ngram_size=2,
        temperature=0.7,
        max_length=24,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

keywords = 'model, Fine-tuning, Machine Learning'
generate_title(keywords)
```
Output:
```
['How to Fine-tune Your Machine Learning Model for Better Performance',
 'Fine-tuning your Machine Learning model with a simple technique',
 'Using fine tuning to fine-tune your machine learning model',
 'Machine Learning: Fine-tuning your model to fit the needs of machine learning',
 'The Art of Fine-Tuning Your Machine Learning Model']
```
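Roughly the same generation can also be run through the text2text-generation pipeline. This is a sketch, not part of the original instructions, and it uses a simplified subset of the generation arguments shown above.

```python
# Sketch: the same model through the pipeline API. Plain beam search is used
# here instead of the diverse beam search shown above, so the candidates may
# be more similar to each other.
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="Ateeqq/keywords-title-generator",
    token="your_token",
    device=0,  # GPU index; use device=-1 to run on CPU
)

results = pipe(
    "model, Fine-tuning, Machine Learning",
    num_beams=5,
    num_return_sequences=5,
    max_length=24,
)
print([r["generated_text"] for r in results])
```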
Disclaimer:
This model is provided under a non-exclusive, non-transferable license. This means you cannot freely share it with others or sell the model itself; however, you can use the model for commercial purposes.