---
language:
  - en
  - de
  - fr
  - it
  - pt
  - hi
  - es
  - th
library_name: transformers
pipeline_tag: text-generation
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
  - generated_from_trainer
  - facebook
  - meta
  - pytorch
  - llama
  - llama-3
model-index:
  - name: llama3.2-1b-synthia-i
    results: []
---

# Llama 3.2 1B - Synthia-v1.5-I - Redmond - Fine-tuned Model

This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the Synthia-v1.5-I dataset.

Thanks to RedmondAI for all the GPU support!

## Model Description

The base model is Llama 3.2 1B, a multilingual large language model developed by Meta. This version has been fine-tuned on the Synthia-v1.5-I instruction dataset to improve its instruction-following capabilities.
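
A minimal inference sketch using the 🤗 Transformers `pipeline` API. The Hub id below is assumed from this card's metadata and may differ from the actual repository name:

```python
from transformers import pipeline

# Assumed repository id for this fine-tune -- adjust if the model
# is hosted under a different name.
generator = pipeline(
    "text-generation",
    model="artificialguybr/llama3.2-1b-synthia-i",
    device_map="auto",  # requires the accelerate package
)

result = generator("Write a haiku about fine-tuning.", max_new_tokens=64)
print(result[0]["generated_text"])
```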

## Training Data

The model was fine-tuned on Synthia-v1.5-I, an instruction dataset of roughly 20.7k training examples.
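
As a sketch, the dataset could be inspected with 🤗 Datasets. The Hub id used here is hypothetical; verify where Synthia-v1.5-I is actually hosted before running this:

```python
from datasets import load_dataset

# Hypothetical Hub id for Synthia-v1.5-I -- replace with the actual
# repository if the dataset is published elsewhere.
dataset = load_dataset("migtissera/Synthia-v1.5-I", split="train")

print(len(dataset))  # expected on the order of 20.7k examples
print(dataset[0])    # inspect one instruction/response pair
```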

## Training Procedure

The model was trained with the following hyperparameters (mirrored in the sketch after the list):

- Learning rate: 2e-05
- Train batch size: 1
- Eval batch size: 1
- Seed: 42
- Gradient accumulation steps: 8
- Total train batch size: 8
- Optimizer: Paged AdamW 8-bit (betas=(0.9, 0.999), epsilon=1e-08)
- LR scheduler: cosine with 100 warmup steps
- Number of epochs: 3
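
For reference, a minimal sketch of how these settings map onto 🤗 Transformers `TrainingArguments`. The actual run was driven by an Axolotl config (see Training Infrastructure below), so this is an illustrative equivalent, not the original configuration:

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; the real training
# run used Axolotl, not this object. output_dir is hypothetical.
args = TrainingArguments(
    output_dir="llama3.2-1b-synthia-i",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,   # effective batch size: 1 x 8 = 8
    optim="paged_adamw_8bit",        # requires the bitsandbytes package
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=3,
)
```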

## Framework Versions

- Transformers 4.46.1
- PyTorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.3
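
To approximate this environment, the pinned versions above can be installed directly (assuming CUDA 12.1 wheels for PyTorch are available on your platform):

```bash
pip install "transformers==4.46.1" "datasets==3.0.1" "tokenizers==0.20.3"
pip install "torch==2.3.1" --index-url https://download.pytorch.org/whl/cu121
```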

## Intended Use

This model is intended for:

- Instruction following tasks
- Conversational AI applications (see the sketch below)
- Research and development in natural language processing
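
For conversational use, a minimal sketch assuming the plain-text `SYSTEM`/`USER`/`ASSISTANT` prompt format used by earlier Synthia fine-tunes; both the Hub id and the prompt template are assumptions, so verify them against the actual tokenizer configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "artificialguybr/llama3.2-1b-synthia-i"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumed Synthia-style prompt format; check the model's actual
# template before relying on it.
prompt = (
    "SYSTEM: You are a helpful assistant.\n"
    "USER: Explain gradient accumulation in one sentence.\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
reply = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(reply, skip_special_tokens=True))
```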

## Training Infrastructure

The model was trained using the Axolotl framework version 0.5.0.

## License

This model is subject to the Llama 3.2 Community License Agreement. Users must comply with all terms and conditions specified in the license.

Built with Axolotl