Locutusque committed ac95f1f (parent: 346605e): Update README.md

README.md CHANGED
@@ -5,4 +5,6 @@ tags: []
 
 # lr-experiment1-7B
 
-The lr-experiment model series is a research project I'm conducting to determine the best learning rate for fine-tuning Mistral. This model uses a learning rate of 2e-5 with a cosine scheduler and no warmup steps.
+The lr-experiment model series is a research project I'm conducting to determine the best learning rate for fine-tuning Mistral. This model uses a learning rate of 2e-5 with a cosine scheduler and no warmup steps.
+
+I used Locutusque/Hercules-2.0-Mistral-7B as a base model and further fine-tuned it on CollectiveCognition/chats-data-2023-09-22 for 3 epochs. I will be keeping track of evaluation results and comparing them to upcoming models.
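The README above describes a peak learning rate of 2e-5 with a cosine scheduler and no warmup steps. As a minimal sketch (not the author's actual training script, which is not shown in the commit), a cosine schedule without warmup decays from the peak rate to zero like this:

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float = 2e-5) -> float:
    """Cosine-annealed learning rate with no warmup:
    starts at peak_lr at step 0 and decays smoothly to 0 at total_steps."""
    progress = step / total_steps
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

# With no warmup there is no ramp-up phase: the very first step
# already uses the full peak learning rate.
total = 100
start = cosine_lr(0, total)      # equals peak_lr (2e-5)
middle = cosine_lr(50, total)    # roughly half of peak_lr
end = cosine_lr(100, total)      # approximately 0
```

In trainers such as Hugging Face's, the equivalent configuration would be a `cosine` scheduler type with zero warmup steps; the function above only illustrates the shape of the resulting schedule.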