---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Fin-LLaMA 3.1 8B
This is the model card for **Fin-LLaMA 3.1 8B**, a version of LLaMA 3.1 8B fine-tuned on financial news data. The model is built to generate coherent, relevant text on financial, economic, and business topics, and is also distributed in several quantized GGUF formats for resource-efficient deployment.
## Model Details
### Model Description
The Fin-LLaMA 3.1 8B model was fine-tuned using the **Unsloth** library, employing LoRA adapters for efficient training, and is available in various quantized GGUF formats. The model is instruction-tuned to generate text in response to finance-related queries.
- **Developed by:** us4
- **Model type:** Transformer (LLaMA 3.1 architecture, 8B parameters)
- **Languages:** English
- **License:** [More Information Needed]
- **Fine-tuned from model:** LLaMA 3.1 8B
### Files and Formats
The repository contains multiple files, including safetensors and GGUF formats for different quantization levels. Below is the list of key files and their details:
- **`adapter_config.json`** (778 Bytes): Configuration for the adapter model.
- **`adapter_model.safetensors`** (5.54 GB): Adapter model in safetensors format.
- **`config.json`** (978 Bytes): Model configuration file.
- **`generation_config.json`** (234 Bytes): Generation configuration file for text generation.
- **`model-00001-of-00004.safetensors`** (4.98 GB): Part 1 of the model in safetensors format.
- **`model-00002-of-00004.safetensors`** (5.00 GB): Part 2 of the model in safetensors format.
- **`model-00003-of-00004.safetensors`** (4.92 GB): Part 3 of the model in safetensors format.
- **`model-00004-of-00004.safetensors`** (1.17 GB): Part 4 of the model in safetensors format.
- **`model-q4_0.gguf`** (4.66 GB): Quantized GGUF format (Q4_0).
- **`model-q4_k_m.gguf`** (4.92 GB): Quantized GGUF format (Q4_K_M).
- **`model-q5_k_m.gguf`** (5.73 GB): Quantized GGUF format (Q5_K_M).
- **`model-q8_0.gguf`** (8.54 GB): Quantized GGUF format (Q8_0).
- **`model.safetensors.index.json`** (24 KB): Index file for the safetensors model.
- **`special_tokens_map.json`** (454 Bytes): Special tokens mapping file.
- **`tokenizer.json`** (9.09 MB): Full tokenizer definition (vocabulary and merges).
- **`tokenizer_config.json`** (55.4 KB): Tokenizer configuration settings.
- **`training_args.bin`** (5.56 KB): Training arguments used for fine-tuning.
### GGUF Formats and Usage
The GGUF formats are optimized for memory-efficient inference, especially for edge devices or deployment in low-resource environments. Here’s a breakdown of the quantized GGUF formats available:
- **Q4_0**: 4-bit quantization with the smallest memory footprint and the largest precision loss.
- **Q4_K_M**: 4-bit K-quant ("medium") variant that preserves more precision than Q4_0 at a slightly larger size.
- **Q5_K_M**: 5-bit K-quant variant balancing memory efficiency and accuracy.
- **Q8_0**: 8-bit quantization for near-full precision with the largest memory footprint among the GGUF files.
**GGUF files available in the repository:**
- `model-q4_0.gguf` (4.66 GB)
- `model-q4_k_m.gguf` (4.92 GB)
- `model-q5_k_m.gguf` (5.73 GB)
- `model-q8_0.gguf` (8.54 GB)
To load the fine-tuned weights for inference with Unsloth (the GGUF files themselves target llama.cpp-compatible runtimes; see the sketch after this block):
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
# load_in_4bit=True applies 4-bit (bitsandbytes) quantization to cut memory use;
# set it to False to load the full-precision safetensors weights instead.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="us4/fin-llama3.1-8b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's optimized inference mode
```
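The GGUF quantizations are intended for llama.cpp-compatible runtimes. A minimal sketch using the llama-cpp-python package (a separate dependency, not part of this repository; the file path and prompt are illustrative):

```python
# Hedged sketch: running one of the GGUF quantizations with llama-cpp-python
# (install separately, e.g. `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # path to a GGUF file downloaded from this repo
    n_ctx=2048,                      # context window, matching the training sequence length
)

out = llm("Summarize today's move in US Treasury yields:", max_tokens=200)
print(out["choices"][0]["text"])
```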
## Model Sources
- **Repository:** [Fin-LLaMA 3.1 8B on Hugging Face](https://huggingface.co/us4/fin-llama3.1-8b)
- **Paper:** [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
## Uses
The Fin-LLaMA 3.1 8B model is designed for generating finance-, economics-, and business-related text.
### Direct Use
The model can be directly used for text generation tasks, such as generating financial news summaries, analysis, or responses to finance-related prompts.
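A minimal text-generation sketch, reusing the `model` and `tokenizer` loaded with Unsloth above (the prompt is illustrative):

```python
# Minimal generation sketch; assumes model/tokenizer were loaded with Unsloth as shown earlier.
prompt = "Summarize the likely impact of a 25 bps rate hike on regional bank earnings."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```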
### Downstream Use
The model can be further fine-tuned for specific financial tasks, such as question-answering systems, summarization of financial reports, or automation of business processes.
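As an illustrative sketch (not the authors' exact recipe), the checkpoint can be wrapped with fresh LoRA adapters via Unsloth before being handed to a trainer such as TRL's `SFTTrainer`; a matching trainer configuration is sketched in the hyperparameters section below. The LoRA settings here are assumptions, not the published training configuration:

```python
# Hedged sketch: attaching new LoRA adapters to the fine-tuned model for further training.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="us4/fin-llama3.1-8b",
    max_seq_length=2048,
    load_in_4bit=True,
)
# If the checkpoint loads as plain weights (not already a PEFT model), attach LoRA adapters:
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                    # illustrative rank, not the authors' setting
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```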
### Out-of-Scope Use
The model is not suited for use in domains outside of finance, such as medical or legal text generation, nor should it be used for tasks that require deep financial forecasting or critical decision-making without human oversight.
## Bias, Risks, and Limitations
The model may inherit biases from the financial news data it was trained on. Because financial reporting is often region-specific and can favor particular companies, users should exercise caution when applying the model to markets or regions not well represented in the training data.
### Recommendations
Users should carefully evaluate the generated text in critical business or financial settings. Ensure the generated content aligns with local regulations and company policies.
## Training Details
### Training Data
The model was fine-tuned on a dataset of financial news articles, consisting of titles and content from various financial media sources. The dataset has been pre-processed to remove extraneous information and ensure consistency across financial terms.
### Training Procedure
#### Preprocessing
The training data was tokenized using the LLaMA tokenizer, with prompts formatted to include both the title and content of financial news articles.
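The exact prompt template is not published; a hypothetical formatting function along these lines illustrates the title-plus-content structure:

```python
# Hypothetical prompt template for a financial news example (the actual template
# used during fine-tuning is not published in this repository).
def format_prompt(title: str, content: str) -> str:
    return (
        "### Title:\n"
        f"{title}\n\n"
        "### Article:\n"
        f"{content}"
    )

print(format_prompt("Fed holds rates steady", "The Federal Reserve kept its benchmark rate unchanged..."))
```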
#### Training Hyperparameters
- **Training regime:** Mixed precision (FP16), gradient accumulation steps: 8, max steps: 500.
- **Learning rate:** 5e-5 for fine-tuning, 1e-5 for embeddings.
- **Batch size:** 8 per device (a matching trainer configuration is sketched below this list).
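The hyperparameters above correspond roughly to the following TRL `SFTTrainer` configuration, a hedged sketch using TRL's pre-0.12 argument names; `train_dataset` is a placeholder, and the separate 1e-5 embedding learning rate (which would need custom optimizer parameter groups) is omitted:

```python
# Hedged sketch mapping the listed hyperparameters onto TRL's SFTTrainer.
# "train_dataset" is assumed to be a datasets.Dataset with a "text" column.
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                     # LoRA-wrapped model from the Unsloth sketch above
    tokenizer=tokenizer,
    train_dataset=train_dataset,     # placeholder dataset
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=8,   # batch size 8 per device
        gradient_accumulation_steps=8,   # as listed above
        max_steps=500,                   # as listed above
        learning_rate=5e-5,              # fine-tuning learning rate
        fp16=True,                       # mixed-precision training
        logging_steps=10,
    ),
)
trainer.train()
```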
#### Speeds, Sizes, Times
Training ran for approximately 500 steps on a single A100 GPU. Exported model files range from 4.66 GB to 8.54 GB depending on the quantization format.
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was tested on unseen financial news articles from the same source domains as the training set.
#### Factors
Evaluation focused on the model’s ability to generate coherent financial summaries and responses.
#### Metrics
Evaluation used common text-generation metrics, including perplexity and summarization accuracy, alongside human-in-the-loop review.
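For reference, perplexity is the exponential of the average per-token negative log-likelihood; a minimal sketch on a single evaluation text (reusing `model` and `tokenizer` from the loading example, with an illustrative sentence):

```python
import math

import torch

# Minimal perplexity sketch: exp(mean negative log-likelihood per token) for one text.
text = "The central bank held rates steady, citing cooling inflation."
enc = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean cross-entropy over tokens
print(f"Perplexity: {math.exp(loss.item()):.2f}")
```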
### Results
The model demonstrated strong performance in generating high-quality financial text. It maintained coherence over long sequences and accurately represented financial data from the prompt.
## Model Examination
No interpretability techniques have yet been applied to this model, but explainability is under consideration for future versions.
## Environmental Impact
Training carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
- **Hardware Type:** A100 GPU
- **Hours used:** Approximately 72 hours for fine-tuning
- **Cloud Provider:** AWS
- **Compute Region:** US-East
- **Carbon Emitted:** Estimated at 43 kg of CO2eq
## Technical Specifications
### Model Architecture and Objective
The Fin-LLaMA 3.1 8B model is based on the LLaMA 3.1 architecture and uses LoRA adapters to efficiently fine-tune the model on financial data.
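If only the LoRA adapter files are needed (rather than the merged weights), they can be applied to a base model with PEFT. A hedged sketch; the base-model ID below is an assumption, and `adapter_config.json` in the repository records the actual base:

```python
# Hedged sketch: attaching the published LoRA adapter to a LLaMA 3.1 8B base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",  # assumed base model; check adapter_config.json
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "us4/fin-llama3.1-8b")  # load the LoRA weights
tokenizer = AutoTokenizer.from_pretrained("us4/fin-llama3.1-8b")
```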
### Compute Infrastructure
The model was trained on A100 GPUs using PyTorch and the Hugging Face 🤗 Transformers library.
#### Hardware
- **GPU:** A100 (80GB)
- **Storage Requirements:** Around 20 GB for the fine-tuned checkpoints, depending on quantization format.
#### Software
- **Library:** Hugging Face Transformers, Unsloth, PyTorch, PEFT
- **Version:** Unsloth v1.0, PyTorch 2.0, Hugging Face Transformers 4.30.0
## Citation
If you use this model in your research or applications, please consider citing:
**BibTeX:**
```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}

@misc{us4_fin_llama3_1,
  title={Fin-LLaMA 3.1 8B - Fine-tuned on Financial News},
  author={us4},
  year={2024},
  howpublished={\url{https://huggingface.co/us4/fin-llama3.1-8b}},
}
```
## More Information
For any additional information, please refer to the repository or contact the authors via the Hugging Face Hub.
## Model Card Contact
[More Information Needed]