---
library_name: peft
tags:
  - code
  - gpt2
datasets:
  - garage-bAInd/Open-Platypus
base_model: gpt2
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** gpt2

**Dataset:** garage-bAInd/Open-Platypus

#### Dataset Insights:

The garage-bAInd/Open-Platypus dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is composed of several open datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%.
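
As a rough illustration of that similarity-based filtering step (not the Platypus authors' exact pipeline), a minimal sketch using the sentence-transformers library might look like the following; the embedding model, threshold handling, and sample questions are assumptions:

```python
# Hypothetical sketch of similarity-based deduplication with Sentence Transformers.
# The embedding model and the greedy keep/drop strategy are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

questions = [
    "What is the derivative of x^2?",
    "Compute the derivative of x squared.",
    "Name the capital of France.",
]

embeddings = model.encode(questions, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

# Keep a question only if it is not more than 80% similar to an already-kept one.
kept = []
for i in range(len(questions)):
    if all(similarity[i][j].item() <= 0.80 for j in kept):
        kept.append(i)

deduplicated = [questions[i] for i in kept]
print(deduplicated)
```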

#### Finetuning Details:

Using MonsterAPI's LLM finetuner, this finetuning:

- Was achieved with great cost-effectiveness.
- Completed in a total duration of 5m 18s for 1 epoch using an A6000 48GB GPU.
- Cost $0.168 for the entire epoch.

#### Hyperparameters & Additional Details:

- Epochs: 1
- Cost Per Epoch: $0.168
- Total Finetuning Cost: $0.168
- Model Path: gpt2
- Learning Rate: 0.0002
- Data Split: 100% train
- Gradient Accumulation Steps: 4
- LoRA r: 32
- LoRA alpha: 64

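For reference, here is a minimal sketch of what an equivalent PEFT LoRA setup could look like with the hyperparameters listed above (r=32, alpha=64, learning rate 0.0002, gradient accumulation 4, 1 epoch). The target modules, dropout, and output path are assumptions; MonsterAPI's internal finetuner configuration is not published here.

```python
# Hedged sketch of a LoRA configuration matching this card's hyperparameters.
# target_modules, lora_dropout, and output_dir are assumptions.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=32,                        # lora r from the card
    lora_alpha=64,               # lora alpha from the card
    target_modules=["c_attn"],   # assumed: GPT-2 fused attention projection
    lora_dropout=0.05,           # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="gpt2-open-platypus-lora",  # hypothetical output path
    num_train_epochs=1,
    learning_rate=0.0002,
    gradient_accumulation_steps=4,
)
# A Trainer (or SFT-style trainer) over the tokenized
# garage-bAInd/Open-Platypus train split would complete the loop.
```
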
#### Train Loss:

*Training loss curve (image).*
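
To use the resulting adapter, it can be loaded on top of the gpt2 base model with PEFT. The adapter repository id below is a hypothetical placeholder; substitute the actual id of this repository.

```python
# Hedged example of loading the LoRA adapter for inference.
# The adapter repo id is a placeholder, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

model = PeftModel.from_pretrained(base, "souvik0306/gpt2-open-platypus-lora")  # placeholder id

inputs = tokenizer("Explain why the sum of two even numbers is even.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```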
