Lux-Llama
This repository contains a fine-tuned version of the Llama-3.1-8B-Instruct model adapted for Luxembourgish. Fine-tuning was performed with LoRA (Low-Rank Adaptation) on a dataset crafted to elicit Chain-of-Thought (CoT) reasoning in Luxembourgish, using the computational resources of Meluxina, a high-performance computing (HPC) platform operated by LuxProvide.
Model Overview
- Base Model: Llama-3.1-8B-Instruct
- Fine-Tuning Method: LoRA (Low-Rank Adaptation)
- Dataset: Luxembourgish Chain-of-Thought (CoT) dataset
- Compute Platform: Meluxina by LuxProvide
- Fine-Tuning Framework: Unsloth
About Meluxina
Meluxina is Luxembourg's national supercomputer, launched in June 2021 by LuxProvide. It is built on the EVIDEN BullSequana XH2000 platform and provides:
- 18 petaflops of computing power.
- 20 petabytes of storage capacity.
- A scalable architecture integrating simulation, modeling, data analytics, and AI.
At its launch, Meluxina entered the Top500 ranking in 36th place worldwide and was recognized as the greenest supercomputer in the EU. Named after Melusina, the mermaid of Luxembourgish legend, it symbolizes digital innovation and uses water-cooling technology for energy efficiency.
Features
- Language: Luxembourgish
- Specialization: Chain-of-Thought reasoning for complex problem-solving and step-by-step explanations.
- Efficiency: LoRA fine-tuning updates only a small set of adapter weights, keeping compute and memory overhead low while preserving the base model's capabilities (see the sketch below).
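As a rough illustration of that efficiency (a minimal sketch using the PEFT library with illustrative settings, not the configuration used for Lux-Llama), the fraction of trainable parameters added by LoRA adapters can be inspected directly:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative only: attach LoRA adapters to the base model and report trainable parameters.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
lora_config = LoraConfig(r=16, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # LoRA typically trains well under 1% of the 8B parameters
```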
Installation
To use the fine-tuned model, first install the dependencies below (the commands are written for a Jupyter or Colab notebook cell):
```python
%%capture
!pip install unsloth
# Also get the latest nightly Unsloth!
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
```
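Before loading the model, a quick sanity check (a minimal sketch, assuming a CUDA-capable GPU, which the 4-bit loading path below relies on) can confirm the environment:

```python
import torch

# The inference example below assumes a CUDA device is available.
assert torch.cuda.is_available(), "No CUDA device found."
print("GPU:", torch.cuda.get_device_name(0))
print("bfloat16 supported:", torch.cuda.is_bf16_supported())
```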
You can then load the model as follows:
```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 8192  # Choose any; RoPE scaling is handled internally by Unsloth.
dtype = None           # None for auto detection. Float16 for Tesla T4/V100, Bfloat16 for Ampere+.
load_in_4bit = True    # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "aiplanet/Lux-Llama",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "You are a helpful chatbot. Your primary task is to provide clear and relevant responses to user queries. Always respond in Luxembourgish unless the user explicitly requests otherwise. You must NOT provide a translation for your reply.",  # instruction
            "Erklär mech Schrëtt fir Schrëtt, wéi ee Brout bäckt.",  # input: "Explain to me, step by step, how to bake bread."
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 512, repetition_penalty = 1.2)
tokenizer.batch_decode(outputs, skip_special_tokens = True)
```
Sample output (Luxembourgish):

```
Fir e purescht Brout ze baaken, brauchst du een Härzchen (ca 250g) an d'Zucker (50g). D'Recept kann och mat Käsekugeln oder Nüssen ergänzt ginn. Et ass wichteg, dass ee gutt nachbaket! Ee muss also den Ofen op 180°C schalten an de Brot am ëmmersten drënne Plätz vun der Ablage stellen. De Baackezäit hängt vu der Gréisst vum Bierg an dem Wénkel, dee ee beim Bauen benutzt. Fir en normale Brotpartee staavwouert et zirka 45-60 Minutten. Määnter net, ob ee dat selwer ausprobiieren wollet!
```
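For interactive use, the same model can stream tokens as they are generated. The sketch below reuses the model, tokenizer, and alpaca_prompt defined above and relies on transformers' TextStreamer; the Luxembourgish question is purely illustrative:

```python
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "You are a helpful chatbot. Always respond in Luxembourgish.",  # instruction
            "Wat ass d'Haaptstad vu Lëtzebuerg?",  # input: "What is the capital of Luxembourg?"
            "",  # output - left blank for generation
        )
    ],
    return_tensors = "pt",
).to("cuda")

# Tokens are printed to stdout as soon as they are generated.
_ = model.generate(**inputs, streamer = streamer, max_new_tokens = 256, repetition_penalty = 1.2)
```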
Dataset Details
The Luxembourgish Chain-of-Thought (CoT) dataset used for fine-tuning consists of:
- Examples: Handcrafted and curated examples of multi-step reasoning tasks.
- Domains: Mathematics, general knowledge, and real-life problem-solving scenarios.
- Format: Text sequences with labeled CoT reasoning in Luxembourgish (an illustrative record is sketched below).
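The dataset itself has not been published with this card, so the record below is a hypothetical illustration of how an instruction, input, and step-by-step response could be stored and rendered into the alpaca_prompt template used for inference:

```python
# Hypothetical example record - illustrative only, not taken from the actual dataset.
example = {
    "instruction": "Léis déi folgend Rechenaufgab Schrëtt fir Schrëtt.",  # "Solve the following arithmetic task step by step."
    "input": "Wéi vill ass 12 x 15?",  # "How much is 12 x 15?"
    "output": (
        "Schrëtt 1: 12 x 10 = 120. "
        "Schrëtt 2: 12 x 5 = 60. "
        "Schrëtt 3: 120 + 60 = 180. D'Äntwert ass 180."  # "The answer is 180."
    ),
}

# During training, each record would be rendered into the same alpaca_prompt template shown above.
text = alpaca_prompt.format(example["instruction"], example["input"], example["output"])
print(text)
```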
Fine-Tuning Process
- Framework: Fine-tuning was conducted with Unsloth, a library for efficient LoRA-based fine-tuning (a configuration sketch follows this list).
- Steps:
  - Initialized the Llama-3.1-8B-Instruct base model.
  - Applied LoRA adapters for parameter-efficient training.
  - Fine-tuned on the Luxembourgish CoT dataset on the Meluxina HPC cluster.
- Hardware: NVIDIA A100 GPUs provided by Meluxina enabled rapid convergence.
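The exact training configuration has not been released, so the snippet below is only a minimal sketch of how such a run could look with Unsloth and TRL's SFTTrainer. The LoRA hyperparameters, dataset path (lux_cot.jsonl), and training arguments are illustrative placeholders, not the values used for Lux-Llama:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit (settings here are illustrative defaults).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/Llama-3.1-8B-Instruct",
    max_seq_length = 8192,
    load_in_4bit = True,
)

# Attach LoRA adapters; r, alpha, and target modules are typical choices, not the released config.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: a JSONL file whose "text" column holds fully formatted alpaca_prompt strings.
dataset = load_dataset("json", data_files = "lux_cot.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 8192,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        bf16 = True,
        output_dir = "outputs",
    ),
)
trainer.train()
```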
Benchmarking
- In progress
Acknowledgments
This work leverages computational resources and support from Meluxina by LuxProvide.