iris-s-coon committed · Commit 7c1715c · 1 Parent(s): efa4cbb

📖 [docs] Update the English README

Files changed (1): docs/README.en.md +19 -17
docs/README.en.md CHANGED
@@ -31,7 +31,7 @@ license: mit
 </p>

 <h2 align="center">
- Llama Model Fine-tuning Experiment Environment
 </h2>

 <p align="center">
@@ -44,38 +44,40 @@ license: mit

 ## 🚀 Project Overview

- **Llama-finetune-sandbox** is an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 includes a repository name change, significantly updated README, and added a Llama model fine-tuning tutorial.

- ## ✨ Key Features

- 1. **Various Fine-tuning Methods:**
  - LoRA (Low-Rank Adaptation)
  - QLoRA (Quantized LoRA)
  - ⚠️~Full Fine-tuning~
  - ⚠️~Parameter-Efficient Fine-tuning (PEFT)~

- 2. **Flexible Model Settings:**
  - Customizable maximum sequence length
  - Various quantization options
  - Multiple attention mechanisms

- 3. **Experimental Environment Setup:**
  - Performance evaluation tools
  - Memory usage optimization
  - Visualization of experimental results

- ## 📚 Implementation Examples

- This repository includes the following implementation examples:

- 1. **High-speed fine-tuning using Unsloth:**
- - Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models.
- - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
- - [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
- - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
-
- 2. Other implementation examples will be added periodically.

 ## 🛠️ Environment Setup

@@ -85,9 +87,9 @@ git clone https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox.git
 cd Llama-finetune-sandbox
 ```

- ## 📝 Adding Example Experiments

- 1. Add new implementations to the `examples/` directory.
 2. Add necessary settings and utilities to `utils/`.
 3. Update documentation and tests.
 4. Create a pull request.
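The LoRA entry in the feature list above can be illustrated with a minimal sketch: instead of updating a full weight matrix `W` (d×k), LoRA trains two small matrices `B` (d×r) and `A` (r×k) with rank r much smaller than d and k, and the effective weight becomes `W + (alpha/r)·B@A`. The toy dimensions and names (`d`, `k`, `r`, `alpha`) below are illustrative, not the repository's code:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """Combine a frozen weight W with the low-rank update (alpha/r) * B @ A."""
    r = len(A)                 # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)       # d x k low-rank update
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

# Toy example: d = 2, k = 2, rank r = 1 -> only d*r + r*k = 4 trainable
# numbers instead of d*k = 4 (the saving grows with real model sizes).
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
B = [[1.0], [2.0]]             # d x r
A = [[0.5, 0.5]]               # r x k
print(lora_effective_weight(W, A, B, alpha=1.0))  # → [[1.5, 0.5], [1.0, 2.0]]
```

At inference time the update can be merged into `W` once, so a LoRA-tuned model adds no per-token overhead.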
 
 </p>

 <h2 align="center">
+ Llama Model Fine-tuning Experiment Environment
 </h2>

 <p align="center">

 ## 🚀 Project Overview

+ **Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 includes a repository name change, a major README update, and the addition of a Llama model fine-tuning tutorial.

+ ## ✨ Main Features

+ 1. **Diverse Fine-tuning Methods**:
  - LoRA (Low-Rank Adaptation)
  - QLoRA (Quantized LoRA)
  - ⚠️~Full Fine-tuning~
  - ⚠️~Parameter-Efficient Fine-tuning (PEFT)~

+ 2. **Flexible Model Configuration**:
  - Customizable maximum sequence length
  - Various quantization options
  - Multiple attention mechanisms

+ 3. **Well-Equipped Experiment Environment**:
  - Performance evaluation tools
  - Memory usage optimization
  - Visualization of experimental results

+ ## 📚 Examples

+ This repository includes the following examples:

+ ### High-Speed Fine-tuning using Unsloth
+ - High-speed fine-tuning implementation for Llama-3.2-1B/3B models
+ - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
+ - → [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
+ - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)

+ ### Efficient Model Operation using Ollama and LiteLLM
+ - Setup and operation guide on Google Colab
+ - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
+ - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)

 ## 🛠️ Environment Setup

 cd Llama-finetune-sandbox
 ```

+ ## 📝 Adding Examples

+ 1. Add a new implementation to the `examples/` directory.
 2. Add necessary settings and utilities to `utils/`.
 3. Update documentation and tests.
 4. Create a pull request.
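The "various quantization options" mentioned in the feature list (for example, the 4-bit weights QLoRA builds on) come down to storing weights on a low-precision integer grid plus a scale. A minimal sketch of symmetric round-to-nearest 4-bit quantization in plain Python; the function names and the single per-tensor scale are illustrative assumptions, not the repository's implementation:

```python
def quantize_4bit(weights):
    """Map floats onto the signed 4-bit grid [-7, 7] with one shared scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid scale == 0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from the integer grid."""
    return [qi * scale for qi in q]

w = [0.42, -1.4, 0.7, 0.0]
q, s = quantize_4bit(w)                    # q = [2, -7, 4, 0], s ≈ 0.2
w_hat = dequantize(q, s)                   # approximate reconstruction
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(err, 3))                    # → [2, -7, 4, 0] 0.1
```

The reconstruction error is bounded by half a grid step, which is the trade-off that lets 4-bit storage cut weight memory roughly 4× versus fp16.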