Update LMFlow support #6
opened by shizhediao2

README.md CHANGED
@@ -111,6 +111,41 @@ print(f"Model response: {response}")
## Finetuning Hymba

[LMFlow](https://github.com/OptimalScale/LMFlow) is a complete pipeline for fine-tuning large language models. The following steps provide an example of how to fine-tune the `Hymba-1.5B-Base` model using LMFlow.
1. Launch the Docker container

```
docker pull ghcr.io/tilmto/hymba:v1
docker run --gpus all -v /home/$USER:/home/$USER -it ghcr.io/tilmto/hymba:v1 bash
```
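The `--gpus all` flag passes the host GPUs through to the container. As a quick sanity check (assuming the image ships the standard CUDA utilities), you can confirm the GPUs are visible before proceeding:

```
# Inside the container: list the GPUs the container can see
nvidia-smi
```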
2. Install LMFlow

```
git clone https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .
```
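To verify the editable install (assuming the package is importable as `lmflow`, matching the repository name), a one-line check from the activated environment:

```
# Import check for the freshly installed package
python -c "import lmflow" && echo "LMFlow import OK"
```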
3. Fine-tune the model

```
cd LMFlow
bash ./scripts/run_finetune_hymba.sh
```
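The Hymba script bundles a default recipe. If you need explicit control over the model, dataset, and output paths, LMFlow's general-purpose entry point accepts them as flags; the invocation below is a sketch whose flag names, dataset path, and `nvidia/Hymba-1.5B-Base` model id follow LMFlow's main README and should be verified against the linked repo:

```
# Sketch: general LMFlow fine-tuning entry point with explicit paths
./scripts/run_finetune.sh \
  --model_name_or_path nvidia/Hymba-1.5B-Base \
  --dataset_path data/alpaca/train_conversation \
  --output_model_path output_models/finetuned_hymba
```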
With LMFlow, you can also fine-tune the model on your custom dataset. All you need to do is convert your dataset into the [LMFlow data format](https://optimalscale.github.io/LMFlow/examples/DATASETS.html), sketched below.
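For illustration, here is a minimal dataset file in the `text2text` flavor of that format; the path and the instance contents are placeholders, and the full specification is in the linked docs:

```
# Write a minimal LMFlow text2text dataset (placeholder path and contents)
mkdir -p data/custom
cat > data/custom/train.json <<'EOF'
{
  "type": "text2text",
  "instances": [
    {
      "input": "Question: What is Hymba?",
      "output": "Hymba is a hybrid-head small language model family from NVIDIA."
    }
  ]
}
EOF
```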
In addition to full fine-tuning, you can also fine-tune Hymba efficiently with [DoRA](https://arxiv.org/html/2402.09353v4), [LoRA](https://github.com/OptimalScale/LMFlow?tab=readme-ov-file#lora), [LISA](https://github.com/OptimalScale/LMFlow?tab=readme-ov-file#lisa), [Flash Attention](https://github.com/OptimalScale/LMFlow/blob/main/readme/flash_attn2.md), and other acceleration techniques; a LoRA example follows.
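As one example, LoRA fine-tuning in LMFlow goes through a wrapper script; this is a hypothetical invocation whose script name and flags follow LMFlow's general README, so double-check them there:

```
# Hypothetical LoRA run; verify script name and flags against the LMFlow README
./scripts/run_finetune_with_lora.sh \
  --model_name_or_path nvidia/Hymba-1.5B-Base \
  --dataset_path data/alpaca/train_conversation \
  --output_lora_path output_models/finetuned_hymba_lora
```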
For more details, please refer to the [LMFlow for Hymba](https://github.com/OptimalScale/LMFlow/tree/main/experimental/Hymba) documentation.
## Evaluation

We use [`LM Evaluation Harness`](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the model. The evaluation commands are as follows: