MiniSymposium is an experimental QLoRA model that I created based on Mistral 7B. I created it attempting to achieve these goals:
1. Demonstrate the untapped potential of using a small, focused dataset of handwritten examples instead of training on a large amount of synthetic GPT outputs
2. Create a dataset that allows the model to explore different possible answers from multiple perspectives before reaching a conclusion
3. Develop a model that performs well across various prompt formats, rather than overfitting to a specific kind of format
The current trend in QLoRA/LoRA-based finetuning (and in finetuning local LLMs generally) is to use large synthetic datasets, usually GPT-generated outputs, trained at relatively high learning rates.
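
The training setup isn't spelled out here, so below is a minimal sketch of what a QLoRA run over a small, handwritten dataset might look like with `transformers` and `peft`. The base checkpoint id, the LoRA rank/alpha/target modules, and the dataset contents are illustrative assumptions, not MiniSymposium's actual training values.

```python
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 4-bit precision -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config)

# Attach a LoRA adapter; rank, alpha, and target modules here are
# illustrative guesses, not the values used for MiniSymposium.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights train

# A small, curated, handwritten dataset -- the approach this card argues
# for -- instead of bulk synthetic GPT completions. Placeholder example.
examples = [
    {"text": "Question: ...\nPerspective A: ...\nPerspective B: ...\nConclusion: ..."},
]
dataset = Dataset.from_list(examples)
```

From there a standard `Trainer` (or trl's `SFTTrainer`) run would finish the job; the contrast with the trend described above is that the handful of examples is carefully curated rather than scaled up.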