Merge pull request #9 from Sunwood-ai-labs/translate-readme-11556071483
docs/README.en.md (CHANGED, +19 -17)
</p>

<h2 align="center">
Llama Model Fine-tuning Experiment Environment
</h2>

<p align="center">
## Project Overview

**Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 includes a repository name change, a major README update, and the addition of a Llama model fine-tuning tutorial.
## ✨ Main Features

1. **Diverse Fine-tuning Methods** (a minimal QLoRA sketch follows this list):
   - LoRA (Low-Rank Adaptation)
   - QLoRA (Quantized LoRA)
   - ⚠️ ~~Full Fine-tuning~~
   - ⚠️ ~~Parameter-Efficient Fine-tuning (PEFT)~~

2. **Flexible Model Configuration**:
   - Customizable maximum sequence length
   - Various quantization options
   - Multiple attention mechanisms

3. **Well-Equipped Experiment Environment**:
   - Performance evaluation tools
   - Memory usage optimization
   - Visualization of experimental results
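As a rough illustration of how these methods fit together, here is a minimal QLoRA-style sketch using Hugging Face `transformers`, `bitsandbytes`, and `peft`. The model id and every hyperparameter below are illustrative assumptions, not settings prescribed by this repository:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization config: the "Q" in QLoRA. Keeps the frozen base
# model small in memory while LoRA adapters are trained on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",   # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA: train small low-rank update matrices instead of the full weights.
lora_config = LoraConfig(
    r=16,                        # rank of the update matrices
    lora_alpha=32,               # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only adapter weights are trainable
```

Plain LoRA is the same sketch without `quantization_config`; the strikethrough above marks full fine-tuning as out of scope here.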
## Examples

This repository includes the following examples:
### High-Speed Fine-tuning using Unsloth

- High-speed fine-tuning implementation for Llama-3.2-1B/3B models (a sketch follows below)
- → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
- → [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
- [Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
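For orientation only, a minimal sketch of the Unsloth loading pattern this example is built around; the model id and hyperparameters are placeholders, and the linked notebook above is the authoritative version:

```python
from unsloth import FastLanguageModel

# Unsloth swaps in fused kernels for roughly 2x faster fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # placeholder model id
    max_seq_length=2048,    # customizable maximum sequence length
    load_in_4bit=True,      # 4-bit quantization to reduce memory use
)

# Attach LoRA adapters via Unsloth's own helper.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```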
### Efficient Model Operation using Ollama and LiteLLM

- Setup and operation guide on Google Colab (a sketch follows below)
- → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
- [Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
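As a taste of what the guide covers, a minimal sketch of calling a local Ollama model through LiteLLM's OpenAI-compatible interface; the model tag is a placeholder, and it assumes an Ollama server is already running:

```python
import litellm

# LiteLLM routes an OpenAI-style chat request to the local Ollama server.
response = litellm.completion(
    model="ollama/llama3.2",               # placeholder Ollama model tag
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="http://localhost:11434",     # Ollama's default endpoint
)
print(response.choices[0].message.content)
```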
## 🛠️ Environment Setup

```bash
git clone https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox.git
cd Llama-finetune-sandbox
```
## Adding Examples

1. Add a new implementation to the `examples/` directory.
2. Add necessary settings and utilities to `utils/`.
3. Update documentation and tests.
4. Create a pull request.