Update README.md
README.md
CHANGED
@@ -1,274 +1,9 @@
<a href="https://user-images.githubusercontent.com/1069138/233859311-32aa1f8c-4d68-47ac-8cd9-9313171ff9f9.png"><img width="50%" alt="home" src="https://user-images.githubusercontent.com/1069138/233859311-32aa1f8c-4d68-47ac-8cd9-9313171ff9f9.png"></a><a href="https://user-images.githubusercontent.com/1069138/233859315-e6928aa7-28d2-420b-8366-bc7323c368ca.png"><img width="50%" alt="logs" src="https://user-images.githubusercontent.com/1069138/233859315-e6928aa7-28d2-420b-8366-bc7323c368ca.png"></a>

## Jump to

- [With H2O LLM Studio, you can](#with-h2o-llm-studio-you-can)
- [Quickstart](#quickstart)
- [What's New](#whats-new)
- [Setup](#setup)
  - [Recommended Install](#recommended-install)
  - [Using requirements.txt](#using-requirementstxt)
- [Run H2O LLM Studio GUI](#run-h2o-llm-studio-gui)
- [Run H2O LLM Studio GUI using Docker from a nightly build](#run-h2o-llm-studio-gui-using-docker-from-a-nightly-build)
- [Run H2O LLM Studio GUI by building your own Docker image](#run-h2o-llm-studio-gui-by-building-your-own-docker-image)
- [Run H2O LLM Studio with command line interface (CLI)](#run-h2o-llm-studio-with-command-line-interface-cli)
- [Data format and example data](#data-format-and-example-data)
- [Training your model](#training-your-model)
- [Example: Run on OASST data via CLI](#example-run-on-oasst-data-via-cli)
- [Model checkpoints](#model-checkpoints)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)

## With H2O LLM Studio, you can

- easily and effectively fine-tune LLMs **without the need for any coding experience**.
- use a **graphical user interface (GUI)** specially designed for large language models.
- finetune any LLM using a large variety of hyperparameters.
- use recent finetuning techniques such as [Low-Rank Adaptation (LoRA)](https://arxiv.org/abs/2106.09685) and 8-bit model training with a low memory footprint.
- use Reinforcement Learning (RL) to finetune your model (experimental).
- use advanced evaluation metrics to judge answers generated by the model.
- track and compare your model performance visually. In addition, [Neptune](https://neptune.ai/) integration can be used.
- chat with your model and get instant feedback on your model performance.
- easily export your model to the [Hugging Face Hub](https://huggingface.co/) and share it with the community.

## Quickstart

For questions, discussion, or just hanging out, come and join our [Discord](https://discord.gg/WKhYMWcVbq)!

We offer several ways of getting started quickly.

Using CLI for fine-tuning LLMs:

[![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://www.kaggle.com/code/ilu000/h2o-llm-studio-cli/) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1soqfJjwDJwjjH-VzZYO_pUeLx5xY4N1K?usp=sharing)

## What's New

- [PR 599](https://github.com/h2oai/h2o-llmstudio/pull/599) Added `KTOPairLoss` for DPO modeling, allowing models to be trained with simple preference data. The data currently needs to be prepared manually by randomly matching positive and negative examples as pairs.
- [PR 592](https://github.com/h2oai/h2o-llmstudio/pull/592) Starting to deprecate RLHF in favor of DPO/IPO optimization. Training is disabled, but old experiments are still viewable. RLHF will be fully removed in a future release.
- [PR 530](https://github.com/h2oai/h2o-llmstudio/pull/530) Introduced a new problem type for DPO/IPO optimization. This optimization technique can be used as an alternative to RLHF.
- [PR 288](https://github.com/h2oai/h2o-llmstudio/pull/288) Introduced DeepSpeed for sharded training, allowing larger models to be trained on machines with multiple GPUs. Requires NVLink. This feature replaces FSDP and offers more flexibility. DeepSpeed requires a system installation of cudatoolkit; we recommend version 11.8. See [Recommended Install](#recommended-install).
- [PR 449](https://github.com/h2oai/h2o-llmstudio/pull/449) A new problem type for Causal Classification Modeling allows training binary and multiclass models using LLMs.
- [PR 364](https://github.com/h2oai/h2o-llmstudio/pull/364) User secrets are now handled more securely and flexibly. Support for handling secrets using the 'keyring' library was added. The framework attempts to migrate existing user settings automatically.

Please note that due to rapid ongoing development we cannot guarantee full backwards compatibility of new functionality. We thus recommend pinning the version of the framework to the one you used for your experiments. For resetting, please delete or back up your `data` and `output` folders.

## Setup

H2O LLM Studio requires a machine with Ubuntu 16.04+ and at least one recent Nvidia GPU with Nvidia drivers version >= 470.57.02. For larger models, we recommend at least 24GB of GPU memory.
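
To verify the installed driver version before you start, you can query it with `nvidia-smi`:

```bash
# Print name, driver version, and total memory for each visible GPU
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```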

For more information about installation prerequisites, see the [Set up H2O LLM Studio](https://docs.h2o.ai/h2o-llmstudio/get-started/set-up-llm-studio#prerequisites) guide in the documentation.

For a performance comparison of different GPUs, see the [H2O LLM Studio performance](https://h2oai.github.io/h2o-llmstudio/get-started/llm-studio-performance) guide in the documentation.

### Recommended Install

The recommended way to install H2O LLM Studio is using pipenv with Python 3.10. To install Python 3.10 on Ubuntu 16.04+, execute the following commands:

#### System installs (Python 3.10)

```bash
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3.10
sudo apt-get install python3.10-distutils
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
```

#### Installing NVIDIA Drivers (if required)

If deploying on a 'bare metal' machine running Ubuntu, you may need to install the required Nvidia drivers and CUDA. The following commands show how to retrieve the latest drivers for a machine running Ubuntu 20.04 as an example; adjust them as needed for your OS.

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda-repo-ubuntu2004-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2004-11-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda
```

Alternatively, you can install the CUDA toolkit in a conda environment:

```bash
conda create -n llmstudio python=3.10
conda activate llmstudio
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
```

#### Create virtual environment (pipenv)

The following command creates a virtual environment using pipenv and installs the dependencies:

```bash
make setup
```

If you are having trouble installing the flash_attn package, consider running

```bash
make setup-no-flash
```

instead. This will install the dependencies without the flash_attn package. Note that this disables Flash Attention 2, so model training will be slower and consume more memory.

### Using requirements.txt

If you wish to use conda or another virtual environment, you can also install the dependencies using the requirements.txt file:

```bash
pip install -r requirements.txt
pip install flash-attn==2.5.5 --no-build-isolation  # optional for Flash Attention 2
```

## Run H2O LLM Studio GUI

You can start H2O LLM Studio using the following command:

```bash
make llmstudio
```

This command will start the [H2O wave](https://github.com/h2oai/wave) server and app.
Navigate to <http://localhost:10101/> (we recommend using Chrome) to access H2O LLM Studio and start fine-tuning your models!

If you are running H2O LLM Studio with a custom environment other than Pipenv, you need to start the app as follows:

```bash
H2O_WAVE_APP_ADDRESS=http://127.0.0.1:8756 \
H2O_WAVE_MAX_REQUEST_SIZE=25MB \
H2O_WAVE_NO_LOG=true \
H2O_WAVE_PRIVATE_DIR="/download/@output/download" \
wave run app
```

## Run H2O LLM Studio GUI using Docker from a nightly build

Install Docker first by following instructions from [NVIDIA Containers](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). Make sure to have `nvidia-container-toolkit` installed on your machine as outlined in the instructions.

H2O LLM Studio images are stored in the h2oai GCR vorvan container repository.

```bash
mkdir -p `pwd`/data
mkdir -p `pwd`/output

# make sure to pull the latest image if you still have a prior version cached
docker pull gcr.io/vorvan/h2oai/h2o-llmstudio:nightly

# run the container
docker run \
    --runtime=nvidia \
    --shm-size=64g \
    --init \
    --rm \
    -u `id -u`:`id -g` \
    -p 10101:10101 \
    -v `pwd`/data:/workspace/data \
    -v `pwd`/output:/workspace/output \
    -v ~/.cache:/home/llmstudio/.cache \
    gcr.io/vorvan/h2oai/h2o-llmstudio:nightly
```

Navigate to <http://localhost:10101/> (we recommend using Chrome) to access H2O LLM Studio and start fine-tuning your models!

(Note: other helpful Docker commands are `docker ps` and `docker kill`.)

## Run H2O LLM Studio GUI by building your own Docker image

```bash
docker build -t h2o-llmstudio .

mkdir -p `pwd`/data
mkdir -p `pwd`/output

docker run \
    --runtime=nvidia \
    --shm-size=64g \
    --init \
    --rm \
    -u `id -u`:`id -g` \
    -p 10101:10101 \
    -v `pwd`/data:/workspace/data \
    -v `pwd`/output:/workspace/output \
    -v ~/.cache:/home/llmstudio/.cache \
    h2o-llmstudio
```

Alternatively, you can run H2O LLM Studio GUI by using our self-hosted Docker image available [here](https://console.cloud.google.com/gcr/images/vorvan/global/h2oai/h2o-llmstudio).

## Run H2O LLM Studio with command line interface (CLI)

You can also use H2O LLM Studio with the command line interface (CLI) and specify the configuration .yaml file that contains all the experiment parameters. To finetune using H2O LLM Studio with CLI, activate the pipenv environment by running `make shell`, and then use the following command:

```bash
python train.py -Y {path_to_config_yaml_file}
```

To run on multiple GPUs in DDP mode, run the following command:

```bash
bash distributed_train.sh {NR_OF_GPUS} -Y {path_to_config_yaml_file}
```

By default, the framework will run on the first `k` GPUs. If you want to run on specific GPUs instead, set the `CUDA_VISIBLE_DEVICES` environment variable before the command.
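
For example, to launch a two-GPU DDP run restricted to the second and third GPUs (the device indices here are illustrative):

```bash
CUDA_VISIBLE_DEVICES=1,2 bash distributed_train.sh 2 -Y {path_to_config_yaml_file}
```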

To start an interactive chat with your trained model, use the following command:

```bash
python prompt.py -e {experiment_name}
```

where `experiment_name` is the output folder of the experiment you want to chat with (see configuration).
The interactive chat will also work with models that were finetuned using the UI.

To publish the model to Hugging Face, use the following command:

```bash
make shell

python publish_to_hugging_face.py -p {path_to_experiment} -d {device} -a {api_key} -u {user_id} -m {model_name} -s {safe_serialization}
```

`path_to_experiment` is the output folder of the experiment.
`device` is the target device for running the model, either 'cpu' or 'cuda:0'. Default is 'cuda:0'.
`api_key` is the Hugging Face API key. It can be omitted if the user is logged in.
`user_id` is the Hugging Face user ID. It can be omitted if the user is logged in.
`model_name` is the name of the model to be published on Hugging Face. It can be omitted.
`safe_serialization` is a flag indicating whether safe serialization should be used. Default is True.
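
As an illustration, a hypothetical invocation with placeholder values (the experiment path, user ID, and model name below are examples only) might look like:

```bash
python publish_to_hugging_face.py \
    -p output/my-experiment \
    -d cuda:0 \
    -u my-hf-username \
    -m my-llm-studio-model \
    -s True
```

The API key is omitted here, which the script allows when you are already logged in to Hugging Face.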
## Data format and example data

For details on the data format required when importing your data or example data that you can use to try out H2O LLM Studio, see [Data format](https://docs.h2o.ai/h2o-llmstudio/guide/datasets/data-connectors-format#data-format) in the H2O LLM Studio documentation.

## Training your model

With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset and then start training your model. Start by [creating an experiment](https://docs.h2o.ai/h2o-llmstudio/guide/experiments/create-an-experiment). You can then [monitor and manage your experiment](https://docs.h2o.ai/h2o-llmstudio/guide/experiments/view-an-experiment), [compare experiments](https://docs.h2o.ai/h2o-llmstudio/guide/experiments/compare-experiments), or [push the model to Hugging Face](https://docs.h2o.ai/h2o-llmstudio/guide/experiments/export-trained-model) to share it with the community.

## Example: Run on OASST data via CLI

As an example, you can run an experiment on the OASST data via CLI. For instructions, see the [Run an experiment on the OASST data](https://docs.h2o.ai/h2o-llmstudio/guide/experiments/create-an-experiment#run-an-experiment-on-the-oasst-data-via-cli) guide in the H2O LLM Studio documentation.

## Model checkpoints

All open-source datasets and models are posted on [H2O.ai's Hugging Face page](https://huggingface.co/h2oai/) and our [H2OGPT](https://github.com/h2oai/h2ogpt) repository.

## Documentation

Detailed documentation and frequently asked questions (FAQs) for H2O LLM Studio can be found at <https://docs.h2o.ai/h2o-llmstudio/>. If you wish to contribute to the docs, navigate to the `/documentation` folder of this repo and refer to the [README.md](documentation/README.md) for more information.

## Contributing

We are happy to accept contributions to the H2O LLM Studio project. Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for more information.

## License

H2O LLM Studio is licensed under the Apache 2.0 license. Please see the [LICENSE](LICENSE) file for more information.

title: H2ogpt Chatbot
emoji: 📚
colorFrom: yellow
colorTo: yellow
sdk: gradio
sdk_version: 3.41.2
app_file: app.py
pinned: false
license: apache-2.0