
Model Card for Fine-Tuned gemma-2-2b-it on IMDb Sentiment Analysis

Model Summary

This model is a fine-tuned version of google/gemma-2-2b-it using LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning. It was trained on the IMDb dataset for binary sentiment classification (positive and negative) and loaded with 4-bit NF4 quantization via BitsAndBytes for memory and compute efficiency.

The model and its files are available on the Hugging Face Hub as pengsu/MLB-care-for-mind-eng.

Model Details

Developed By:

This model was fine-tuned by [Your Name or Organization] using Hugging Face's peft and transformers libraries with the IMDb dataset for English sentiment analysis.

Model Type:

This is a transformer-based sequence classification model (a Gemma 2 decoder with a classification head), fine-tuned on the IMDb dataset for binary sentiment classification.

Language:

  • Language(s): English (IMDb movie reviews)

License:

[Add relevant license here]

Finetuned From:

  • Base Model: google/gemma-2-2b-it

Framework Versions:

  • Transformers: 4.44.2
  • PEFT: 0.12.0
  • Datasets: 3.0.1
  • PyTorch: 2.4.1+cu121

Intended Uses & Limitations

Intended Use:

This model can be used to classify English movie reviews as positive or negative. It is well-suited for review analysis and similar feedback-style text; related tasks such as social media sentiment classification may require additional fine-tuning (see Limitations below).

Out-of-Scope Use:

The model may not perform well on tasks that require multi-class sentiment classification or text outside of the domain of English movie reviews.

Limitations:

  • Bias: Since the model is trained on IMDb data, it may reflect the dataset's biases and could be less accurate when applied to different domains or types of sentiment analysis.
  • Generalization: The model may not generalize well to other forms of text, such as product reviews or social media comments, without additional fine-tuning.

Model Architecture

Quantization:

The model leverages 4-bit quantization (NF4) using BitsAndBytes to make it more memory-efficient. This allows the model to be run on smaller hardware resources while maintaining competitive performance.
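As a reference, here is a minimal sketch of how such a 4-bit NF4 setup is typically configured with BitsAndBytes; the compute dtype, double-quantization flag, and device map below are assumptions, not values confirmed from the training run:

import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# 4-bit NF4 quantization; compute dtype and double quantization are assumptions
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-2-2b-it",
    num_labels=2,
    quantization_config=bnb_config,
    device_map="auto",
)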

LoRA Configuration:

The model uses Low-Rank Adaptation (LoRA) to efficiently fine-tune a subset of parameters. The specific modules adapted include:

  • down_proj, gate_proj, q_proj, o_proj, up_proj, v_proj, k_proj.

The LoRA configuration is:

  • r = 16, lora_alpha = 32, lora_dropout = 0.05
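A minimal sketch of this LoRA configuration with peft, reusing the quantized base_model from the quantization sketch above; the SEQ_CLS task type is an assumption based on the classification objective:

from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# LoRA settings listed above; SEQ_CLS task type is an assumption
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type=TaskType.SEQ_CLS,
)

# Prepare the 4-bit base model for training and attach the LoRA adapters
base_model = prepare_model_for_kbit_training(base_model)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()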

Training Details

Dataset:

The model was trained on the IMDb dataset, which contains 50,000 labeled movie reviews, split into 25,000 training examples and 25,000 test examples. Each review is labeled as either positive or negative.

  • Train Set Size: 25,000 samples
  • Test Set Size: 25,000 samples
  • Classes: 2 (POSITIVE, NEGATIVE)
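For reference, the standard way to load these splits with the datasets library:

from datasets import load_dataset

# IMDb ships pre-split into 25,000 train and 25,000 test reviews
imdb = load_dataset("imdb")
train_ds = imdb["train"]  # labels: 0 = negative, 1 = positive
test_ds = imdb["test"]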

Preprocessing:

Text from IMDb reviews was tokenized using the google/gemma-2-2b-it tokenizer with a maximum sequence length of 64. The tokenization included padding and truncation to ensure consistent input lengths.
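A sketch of that preprocessing step, reusing train_ds and test_ds from the dataset sketch above; padding to the full 64-token maximum (rather than dynamic padding) is an assumption:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")

def tokenize_fn(batch):
    # Pad/truncate every review to the 64-token maximum used for training
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=64)

tokenized_train = train_ds.map(tokenize_fn, batched=True)
tokenized_test = test_ds.map(tokenize_fn, batched=True)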

Hyperparameters:

  • Learning Rate: 2e-5
  • Batch Size (train): 8
  • Batch Size (eval): 8
  • Epochs: 5
  • Optimizer: AdamW (with 8-bit optimization)
  • Weight Decay: 0.01
  • Gradient Accumulation Steps: 2
  • Evaluation Steps: 1000
  • Logging Steps: 1000
  • 4-bit Quantization: Enabled (via BitsAndBytes)
  • Metric for Best Model: Accuracy
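A sketch of TrainingArguments mirroring these hyperparameters; the output directory, save strategy, and the exact 8-bit AdamW variant ("adamw_bnb_8bit") are assumptions:

from transformers import TrainingArguments

# Mirrors the hyperparameters above; output directory, save strategy and the
# exact 8-bit AdamW variant are assumptions
training_args = TrainingArguments(
    output_dir="gemma-2-2b-it-imdb-lora",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    weight_decay=0.01,
    gradient_accumulation_steps=2,
    eval_strategy="steps",
    eval_steps=1000,
    logging_steps=1000,
    save_steps=1000,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    optim="adamw_bnb_8bit",
)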

Evaluation

Metrics:

The model was evaluated on the IMDb test dataset using the following metrics:

  • Accuracy
  • F1 Score (weighted)
  • Precision
  • Recall

The model classifies movie reviews as positive or negative; exact evaluation numbers depend on the specific test run and should be reported here once evaluation on the IMDb test set is complete.
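For completeness, here is a sketch of a compute_metrics function that a Hugging Face Trainer could use to report these metrics; the choice of scikit-learn is an assumption (the evaluate library would work equally well):

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }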

Code Example:

You can load the fine-tuned model and use it for inference on your own data using the code below:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load model and tokenizer.
# Because this repository contains a LoRA adapter, the peft library must be
# installed; transformers then downloads the base model and applies the adapter.
model = AutoModelForSequenceClassification.from_pretrained("pengsu/MLB-care-for-mind-eng")
tokenizer = AutoTokenizer.from_pretrained("pengsu/MLB-care-for-mind-eng")
model.eval()

# Tokenize input text (same 64-token limit used during training)
text = "This movie was absolutely amazing!"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=64)

# Get predictions without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax(-1).item()

# Map prediction to label
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
print(f"Predicted sentiment: {id2label[predicted_class]}")