## **Nous-Capybara-7B**
**MUCH BETTER MISTRAL BASED VERSION COMING SOON AS CAPYBARA V2**
A model created with a novel synthesis method, Amplify-Instruct, in mind, with the goal of synergistically combining techniques used for SOTA models, such as Evol-Instruct, Orca, Vicuna, Lamini, and FLASK, into one lean, holistically formed dataset and model. The seed instructions used to start the synthesized conversations are largely based on highly acclaimed datasets like Airoboros, Know logic, EverythingLM, and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, and are supplemented with certain multi-turn datasets like Dove (a successor to Puffin).
The entire dataset is contained in under 20K training examples, mostly composed of newly synthesized tokens never before used for model training!
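The synthesized multi-turn conversations described above imply a chat-style training layout. As a minimal illustrative sketch only (the `USER:`/`ASSISTANT:` role labels and the `build_prompt` helper are assumptions made here for illustration, not this card's documented prompt format), a seed-derived conversation could be flattened into a single prompt string like so:

```python
def build_prompt(turns):
    """Flatten a list of (role, text) turns into one prompt string.

    NOTE: the role labels used by the caller are illustrative
    assumptions, not an official format from this model card.
    """
    parts = [f"{role}: {text}" for role, text in turns]
    # End with a bare assistant tag so the model continues the dialogue.
    return "\n".join(parts) + "\nASSISTANT:"


# A hypothetical multi-turn exchange grown from one seed instruction.
turns = [
    ("USER", "Summarize the Amplify-Instruct idea in one sentence."),
    ("ASSISTANT", "It blends several instruction-synthesis techniques "
                  "into one lean dataset."),
    ("USER", "Which seed datasets were drawn on?"),
]
prompt = build_prompt(turns)
```

The trailing `ASSISTANT:` tag is a common convention for signaling the model to generate the next turn; consult the model's actual tokenizer/chat-template configuration for the authoritative format.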