---
license: gemma
library_name: transformers
tags:
  - sft
  - generated_from_trainer
base_model: google/gemma-7b
model-index:
  - name: gemma_ft_quote
    results: []
pipeline_tag: text-generation
datasets:
  - Abirate/english_quotes
language:
  - en
widget:
  - text: 'Quote: With great power comes'
    example_title: Example 1
  - text: 'Quote: Hasta la vista baby'
    example_title: Example 2
  - text: 'Quote: Elementary, my dear watson.'
    example_title: Example 3
---

# Gemma_ft_Quote

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b), fine-tuned with LoRA on the English quotes dataset ([Abirate/english_quotes](https://huggingface.co/datasets/Abirate/english_quotes)). It is based on the example provided by Google here. The notebook used to fine-tune the model can be found here.
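For reference, below is a minimal sketch of a LoRA configuration in the style of the Gemma fine-tuning example; the rank and target modules are illustrative assumptions, not values read from this repository.

```python
from peft import LoraConfig

# Illustrative LoRA setup in the style of the Gemma example notebook.
# The rank and target modules are assumptions, not the exact values used here.
lora_config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```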

## Model description

The model can complete popular quotes given to it and add the author of the quote. For example, given the quote below:

Quote: With great power comes

The model would complete the quote and add the author of the quote:

Quote: With great power comes great responsibility. Author: Ben Parker.

Given a complete quote, the model would add the author:

Quote: I'll be back. Author: Arnold Schwarzenegger.
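This `Quote: ... Author: ...` pattern mirrors how training examples can be built from the dataset's `quote` and `author` fields. Below is a minimal sketch; the formatting function is an illustration, not the exact one used in the training notebook.

```python
from datasets import load_dataset

dataset = load_dataset("Abirate/english_quotes", split="train")

# Illustrative prompt format; the exact formatting used during
# fine-tuning may differ.
def format_example(example):
    return f"Quote: {example['quote']} Author: {example['author']}"

print(format_example(dataset[0]))
```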

## Usage

The model can be used with the transformers library. Here's an example of loading it with 4-bit quantization:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "Eteims/gemma_ft_quote"

# 4-bit NF4 quantization with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="cuda:0",
)
```

This code should run comfortably on the free Colab tier.

After loading the model, you can use it for inference:

```python
text = "Quote: Elementary, my dear watson."
device = "cuda:0"
inputs = tokenizer(text, return_tensors="pt").to(device)

# The model should continue the quote and append the author.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
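Alternatively, the already-loaded model and tokenizer can be wrapped in a transformers text-generation pipeline. This is a minimal sketch; the generation settings are illustrative.

```python
from transformers import pipeline

# Reuse the model and tokenizer loaded above.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = generator("Quote: With great power comes", max_new_tokens=20)
print(result[0]["generated_text"])
```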

## Training hyperparameters

The following hyperparameters were used during fine-tuning (a sketch of the corresponding `TrainingArguments` is shown after the list):

- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
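
For readers who want to reproduce the run, here is a minimal sketch of how these values map onto `transformers.TrainingArguments`. The output directory is a placeholder and the fp16 flag is an assumption for "Native AMP"; everything else comes from the list above.

```python
from transformers import TrainingArguments

# Values taken from the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="gemma_ft_quote",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size of 4
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=10,
    fp16=True,  # native AMP mixed precision (assumed fp16)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```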

## Framework versions

- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.3.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2