STEM-AI-mtl committed
Commit 2536e0a
1 Parent(s): 80d01d4

Update README.md

Files changed (1): README.md (+8 -5)
README.md CHANGED
@@ -11,10 +11,10 @@ tags:
 ---
 # Model Card for Model ID
 
-This is the adapters from the LoRa fine-tuning of the phi-2 model from Microsoft. It was trained on the Electrical-engineering dataset.
+This is the adapters from the LoRa fine-tuning of the phi-2 model from Microsoft. It was trained on the Electrical-engineering dataset combined with garage-bAInd/Open-Platypus.
 
 - **Developed by:** STEM.AI
--- **Model type:** Q&A
+- **Model type:** Q&A and code generation
 - **Language(s) (NLP):** English
 - **Finetuned from model [optional]:** microsoft/phi-2
 
@@ -28,16 +28,19 @@ Q&A related to electrical engineering, and Kicad software. Creation of Python co
 
 ### Training Data
 
-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+Dataset related to electrical engineering: STEM-AI-mtl/Electrical-engineering
 
-[More Information Needed]
+Combined with
+
+Dataset related to STEM and NLP: garage-bAInd/Open-Platypus
 
 ### Training Procedure
 
-A LoRa PEFT was performed on a 48 Gb Nvidia GPU.
+A LoRa PEFT was performed on a 48 Gb A40 Nvidia GPU.
 
 ## Model Card Authors [optional]
 
 STEM.AI: stem.ai.mtl@gmail.com
+William Harbec
 
 
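For readers unfamiliar with the LoRA PEFT mentioned in the training procedure above, a minimal numerical sketch of the idea: the frozen base weight `W` is augmented by a trainable low-rank update `B @ A` scaled by `alpha / r`. Shapes and values below are illustrative only, not phi-2's.

```python
import numpy as np

# Illustrative LoRA update: W' = W + (alpha / r) * B @ A.
# d_out, d_in, r, alpha are toy values, not phi-2 hyperparameters.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable LoRA factor
B = np.zeros((d_out, r))                # zero-initialised, so no shift at step 0

def lora_forward(x):
    # Adapted layer: base weight plus the scaled low-rank update.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the frozen base layer exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

The published adapters are, in essence, the trained low-rank factors for selected phi-2 layers; the Hugging Face `peft` library handles attaching them to the base model.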