STEM-AI-mtl committed
Commit cd0f87b
Parent(s): 3b790ed
Update README.md

README.md CHANGED
@@ -26,7 +26,7 @@ This is the adapters from the LoRa fine-tuning of the phi-2 model from Microsoft
 
 Q&A related to electrical engineering, and Kicad software. Creation of Python code in general, and for Kicad's scripting console.
 
-Refer to
+Refer to the microsoft/phi-2 model card for the recommended prompt format.
 
 ## Training Details
 
@@ -39,10 +39,15 @@ Combined with
 Dataset related to STEM and NLP: garage-bAInd/Open-Platypus
 
 ### Training Procedure
+LoRa script: https://github.com/STEM-ai/Phi-2/blob/4eaa6aaa2679427a810ace5a061b9c951942d66a/LoRa.py
 
 A LoRa PEFT was performed on a 48 GB A40 Nvidia GPU.
 
 ## Model Card Authors [optional]
 
 STEM.AI: stem.ai.mtl@gmail.com
-William Harbec
+William Harbec
+
+### Inference example
+
+https://github.com/STEM-ai/Phi-2/blob/4eaa6aaa2679427a810ace5a061b9c951942d66a/chat.py
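For context on the training procedure the diff references (a LoRa PEFT of phi-2, with the actual script in the linked LoRa.py), here is a minimal sketch of how such a setup typically looks with the Hugging Face peft library. The rank, alpha, dropout, and target module names below are illustrative assumptions, not values taken from LoRa.py.

```python
# Minimal sketch of a LoRa PEFT setup for phi-2.
# Hyperparameters are assumptions; see the linked LoRa.py for the real script.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)

# LoRa adapter configuration: only these low-rank matrices receive gradients,
# which is what makes fine-tuning feasible on a single 48 GB A40.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapter weights only; base model stays frozen
```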
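The newly added "Inference example" section points to chat.py. As a rough companion to that link, here is a minimal sketch of loading the phi-2 base model together with these LoRa adapters using transformers and peft; the adapter repository ID is a placeholder assumption, and the instruct-style prompt follows the format the card refers to on the microsoft/phi-2 model card, not the actual contents of chat.py.

```python
# Minimal sketch of inference with the phi-2 base model plus these LoRa adapters.
# The adapter repo ID is an assumption; see the linked chat.py for the real script.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/phi-2"
adapter_id = "STEM-AI-mtl/phi-2-lora-adapters"  # placeholder: replace with this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRa adapters

# Instruct-style prompt format recommended on the microsoft/phi-2 model card.
prompt = (
    "Instruct: Write a Python snippet for KiCad's scripting console that "
    "lists all footprints on the current board.\nOutput:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```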