
Llama-3.2-1B-Indonesian

This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct that has been optimized for Indonesian language understanding and generation.

The fine-tuning process utilized Low-Rank Adaptation (LoRA) to efficiently adapt the model while minimizing computational and storage overhead. This approach enables effective fine-tuning for specific tasks or domains, particularly in the Indonesian language context.
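The exact adapter configuration is not documented in this card. As a rough sketch of how a LoRA adapter for this base model can be set up with peft, see below; the rank, scaling factor, dropout, and target modules are illustrative assumptions, not the settings actually used.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Illustrative LoRA settings; rank, alpha, dropout, and target modules are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are updated during training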

Training and evaluation data

The model was fine-tuned on the Ichsan2895/alpaca-gpt4-indonesian dataset.
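A minimal sketch for loading this dataset with the datasets library; the "train" split name is an assumption, so check the dataset card for the actual splits and columns.

from datasets import load_dataset

# Load the Indonesian Alpaca-GPT4 instruction dataset.
dataset = load_dataset("Ichsan2895/alpaca-gpt4-indonesian")
print(dataset)              # inspect the available splits and columns
print(dataset["train"][0])  # "train" split assumed; verify against the dataset card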

Use with Transformers

import torch
from transformers import pipeline

model_id = "digo-prayudha/Llama-3.2-1B-Indonesian"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    # "Determine the subject of the following sentence: 'The film was released yesterday.'"
    {"role": "user", "content": "Tentukan subjek dari kalimat berikut: 'Film tersebut dirilis kemarin'."},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the full chat history; the last entry is the assistant's reply.
print(outputs[0]["generated_text"][-1])
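If you need more control than the pipeline helper offers, the same generation can be done with an explicit tokenizer and model. This is a standard lower-level equivalent written for illustration, not code taken from this repository; the generation settings are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "digo-prayudha/Llama-3.2-1B-Indonesian"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Tentukan subjek dari kalimat berikut: 'Film tersebut dirilis kemarin'."},
]
# Render the chat messages with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))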

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 6
  • total_train_batch_size: 6
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 3
  • mixed_precision_training: Native AMP
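
These values correspond roughly to the following transformers TrainingArguments sketch; output_dir is a placeholder, and the bf16 flag is an assumption since the card only states "Native AMP" mixed precision.

from transformers import TrainingArguments

# output_dir is a placeholder; bf16 is an assumption for the "Native AMP" mixed precision.
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-indonesian-lora",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=6,   # gives the effective train batch size of 6
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    optim="adamw_torch_fused",
    seed=42,
    bf16=True,
)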

Training results

Figure: training loss curve over the fine-tuning run.

Framework versions

  • PEFT 0.7.2.dev0
  • Transformers 4.46.1
  • Pytorch 2.4.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.20.1