nbeerbower committed
Commit • 5735876
Parent(s): ef0b44d
Create README.md

README.md ADDED
---
license: apache-2.0
library_name: transformers
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
---

![image/png](https://huggingface.co/nbeerbower/Mistral-Small-Gutenberg-Doppel-22B/resolve/main/doppel-header?download=true)

# Mistral-Gutenberg-Doppel-7B-FFT

[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) and [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo).

This is a full finetune rather than my usual QLoRA tunes, done mostly for learning purposes.
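
For reference, a minimal generation sketch with transformers. The repo id below is assumed from this card's title, and the sampling settings are illustrative rather than tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Gutenberg-Doppel-7B-FFT"  # assumed from the card title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The chat template is inherited from Mistral-7B-Instruct-v0.2 ([INST] ... [/INST]).
messages = [
    {"role": "user", "content": "Write the opening paragraph of a gothic short story."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative sampling settings, not author recommendations.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```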

### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 4x A100 for 2 epochs.
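
The linked guide uses TRL's `ORPOTrainer`, so a run along these lines might look like the sketch below. Aside from the 2 epochs stated above, every hyperparameter shown is an assumed placeholder, not the actual recipe used for this model.

```python
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Both datasets expose the prompt/chosen/rejected columns ORPOTrainer expects;
# keep only those so the two can be concatenated.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

config = ORPOConfig(
    output_dir="Mistral-Gutenberg-Doppel-7B-FFT",
    num_train_epochs=2,               # stated above
    per_device_train_batch_size=1,    # assumed
    gradient_accumulation_steps=8,    # assumed
    learning_rate=5e-6,               # assumed
    beta=0.1,                         # ORPO's lambda weight; assumed
    max_length=2048,                  # assumed
    max_prompt_length=1024,           # assumed
    bf16=True,
)

# Newer TRL versions take processing_class=tokenizer instead of tokenizer=.
trainer = ORPOTrainer(model=model, args=config, train_dataset=train, tokenizer=tokenizer)
trainer.train()
```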