Commit f070a7a · Parent: ac9b3ad
Update README.md
README.md CHANGED

@@ -7,6 +7,8 @@ pipeline_tag: text-generation
 tags:
 - companion
 - chat
+datasets:
+- WasamiKirua/Samatha-Phi2-ENG
 ---

 Trained on [phi-2](https://huggingface.co/microsoft/phi-2) as a base model, this Samantha was trained in 3,5 hours on a RTX3090 24GB with [Samantha-1.0-Phi2](https://huggingface.co/datasets/WasamiKirua/Samatha-Phi2-ENG) Dataset

@@ -31,4 +33,4 @@ I'm working on an ITA/ENG version. I plan to merge several dataset and train the
 thanks, greetings, respect and love to:

 https://huggingface.co/cognitivecomputations for the Inspiration and the starting dataset which I've used for this Phi-2 fine tuning
-https://medium.com/@geronimo7 - https://twitter.com/Geronimo_AI for the wonderful article on Medium.com which helped me out a ton
+https://medium.com/@geronimo7 - https://twitter.com/Geronimo_AI for the wonderful article on Medium.com which helped me out a ton
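For reference, a sketch of what the README's YAML frontmatter should read after this commit, reconstructed from the hunk context and added lines (only `pipeline_tag`, `tags`, and `datasets` are visible in the diff; any other frontmatter keys are unknown):

```yaml
pipeline_tag: text-generation
tags:
- companion
- chat
datasets:
- WasamiKirua/Samatha-Phi2-ENG
```

Adding the `datasets` key is what makes the Hub link the model card to the training dataset.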