---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---
# 🇮🇹 Loquace-410m 🇮🇹
An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹

The Loquace Italian LLM models were created as a proof of concept to evaluate how language adaptation can be achieved with QLoRa, by instruction-tuning foundational LLMs on a dataset in a specific language.

The QLoRa (https://github.com/artidoro/qlora) fine-tuning method significantly lowers the resource requirements compared to other available methods, making it possible to run the process on considerably larger datasets while still using consumer GPUs and still achieving high accuracy.
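For reference, the snippet below is a minimal sketch of a QLoRa-style setup using the Hugging Face `peft` and `bitsandbytes` libraries. The hyperparameters and the choice of base checkpoint are illustrative assumptions, not the exact values used to train Loquace.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization so it fits on a consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m",  # assumed base checkpoint for illustration
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Attach small trainable LoRA adapters; only these weights are updated during training
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in Pythia/GPT-NeoX models
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```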
## Model Description
Loquace-410m is the second-smallest model of the Loquace family. It was trained with QLoRa on a dataset of 102k question/answer pairs, exclusively in Italian, using pythia-410m as the base model.
The related code can be found at: https://github.com/cosimoiaia/Loquace
Loquace-410m is part of the larger Loquace family:
- https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
- https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
- https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B
- https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
- https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B
## Usage
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig,
)

tokenizer = AutoTokenizer.from_pretrained(
    "cosimoiaia/Loquace-410m", padding_side="right", use_fast=True
)

# Load the model weights in 4-bit (QLoRa-style quantization) to keep memory usage low
model = AutoModelForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-410m",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
```
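Once loaded, the model can be used like any causal LM. A minimal generation sketch follows; the plain Italian question used as a prompt is only an illustration, since the exact prompt format expected by the model is not specified here.

```python
# Example prompt (illustrative; adapt to the prompt format used in your application)
prompt = "Qual è la capitale dell'Italia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # avoid padding warnings with GPT-NeoX tokenizers
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```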
## Training
Loquace-410m was trained on a conversational dataset comprising 102k question/answer pairs in Italian.
The training data was assembled from translations of the original Alpaca dataset and other sources such as the OpenAssistant dataset.
The model was trained for only 10,000 iterations; training took 9 hours on a single RTX 3090, kindly provided by Genesis Cloud (https://gnsiscld.co/26qhlf).
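The training dataset is published on the Hub as cosimoiaia/Loquace-102k. The sketch below shows one way to load it and render examples as Alpaca-style prompts; the field names (instruction / input / output) are assumed to follow the original Alpaca schema.

```python
from datasets import load_dataset

# Load the Italian instruction dataset used for fine-tuning
dataset = load_dataset("cosimoiaia/Loquace-102k", split="train")

def format_example(example):
    # Build an Alpaca-style prompt (field names assumed: instruction / input / output)
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}"
    return {"text": prompt}

dataset = dataset.map(format_example)
print(dataset[0]["text"])
```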
## Limitations
- Loquace-410m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.
## Dependencies
- PyTorch
- Transformers library by Hugging Face
- bitsandbytes
- QLoRa