iris@sunwood.ai.labs committed on
Commit
79802b4
·
2 Parent(s): efa4cbb 7c1715c

Merge pull request #9 from Sunwood-ai-labs/translate-readme-11556071483

Files changed (1)
  1. docs/README.en.md +19 -17
docs/README.en.md CHANGED
@@ -31,7 +31,7 @@ license: mit
31
  </p>
32
 
33
  <h2 align="center">
34
- ～ Llama Model Fine-tuning Experiment Environment ～
35
  </h2>
36
 
37
  <p align="center">
@@ -44,38 +44,40 @@ license: mit
44
 
45
 ## 🚀 Project Overview
46
 
47
- **Llama-finetune-sandbox** is an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 includes a repository name change, significantly updated README, and added a Llama model fine-tuning tutorial.
48
 
49
- ## ✨ Key Features
50
 
51
- 1. **Various Fine-tuning Methods:**
52
  - LoRA (Low-Rank Adaptation)
53
  - QLoRA (Quantized LoRA)
54
  - โš ๏ธ~Full Fine-tuning~
55
  - โš ๏ธ~Parameter-Efficient Fine-tuning (PEFT)~
56
 
57
- 2. **Flexible Model Settings:**
58
  - Customizable maximum sequence length
59
  - Various quantization options
60
  - Multiple attention mechanisms
61
 
62
- 3. **Experimental Environment Setup:**
63
  - Performance evaluation tools
64
  - Memory usage optimization
65
  - Visualization of experimental results
66
 
 
67
 
68
- ## 📚 Implementation Examples
69
 
70
- This repository includes the following implementation examples:
 
 
 
 
71
 
72
- 1. **High-speed fine-tuning using Unsloth:**
73
- - Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models.
74
- - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
75
- - → [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
76
- - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
77
-
78
- 2. Other implementation examples will be added periodically.
79
 
80
 ## 🛠️ Environment Setup
81
 
@@ -85,9 +87,9 @@ git clone https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox.git
85
  cd Llama-finetune-sandbox
86
  ```
87
 
88
- ## ๐Ÿ“ Adding Example Experiments
89
 
90
- 1. Add new implementations to the `examples/` directory.
91
  2. Add necessary settings and utilities to `utils/`.
92
  3. Update documentation and tests.
93
  4. Create a pull request.
 
31
  </p>
32
 
33
  <h2 align="center">
34
+ Llama Model Fine-tuning Experiment Environment
35
  </h2>
36
 
37
  <p align="center">
 
44
 
45
 ## 🚀 Project Overview
46
 
47
+ **Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 includes a repository name change, a major README update, and the addition of a Llama model fine-tuning tutorial.
48
 
49
+ ## ✨ Main Features
50
 
51
+ 1. **Diverse Fine-tuning Methods**:
52
  - LoRA (Low-Rank Adaptation)
53
  - QLoRA (Quantized LoRA)
54
  - โš ๏ธ~Full Fine-tuning~
55
  - โš ๏ธ~Parameter-Efficient Fine-tuning (PEFT)~
56
 
57
+ 2. **Flexible Model Configuration**:
58
  - Customizable maximum sequence length
59
  - Various quantization options
60
  - Multiple attention mechanisms
61
 
62
+ 3. **Well-Equipped Experiment Environment**:
63
  - Performance evaluation tools
64
  - Memory usage optimization
65
  - Visualization of experimental results
66
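The LoRA entry in the feature list above can be illustrated with a short sketch. This is a generic numpy illustration of the low-rank-update idea, not code from this repository; the dimensions, rank, and `alpha` scaling are illustrative assumptions.

```python
import numpy as np

# LoRA freezes the pretrained weight W and learns a low-rank update B @ A.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2                # illustrative sizes; real models use r << d
alpha = 4.0                             # illustrative scaling hyperparameter

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

W_adapted = W + (alpha / r) * (B @ A)   # effective weight after adaptation

# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(W_adapted, W)

# Trainable parameters: r * (d_in + d_out) instead of d_out * d_in.
full_params, lora_params = d_out * d_in, r * (d_in + d_out)
```

During training only `A` and `B` receive gradients, which is why the memory-optimization tooling listed above pairs naturally with LoRA and QLoRA.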
 
67
+ ## 📚 Examples
68
 
69
+ This repository includes the following examples:
70
 
71
+ ### High-Speed Fine-tuning using Unsloth
72
+ - High-speed fine-tuning implementation for Llama-3.2-1B/3B models
73
+ - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
74
+ - → [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
75
+ - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
76
 
77
+ ### Efficient Model Operation using Ollama and LiteLLM
78
+ - Setup and operation guide on Google Colab
79
+ - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
80
+ - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
 
 
 
81
 
82
 ## 🛠️ Environment Setup
83
 
 
87
  cd Llama-finetune-sandbox
88
  ```
89
 
90
+ ## ๐Ÿ“ Adding Examples
91
 
92
+ 1. Add a new implementation to the `examples/` directory.
93
  2. Add necessary settings and utilities to `utils/`.
94
  3. Update documentation and tests.
95
  4. Create a pull request.