lemonilia committed
Commit f170976 · 1 Parent(s): 9e59333

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -81,7 +81,7 @@ your desired response length:
 
 ## Training procedure
 [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training
-on a 4x NVidia A40 GPU. The model has been trained as an 8-bit LoRA adapter, and
+on a 4x NVidia A40 GPU cluster. The model has been trained as an 8-bit LoRA adapter, and
 it's so large because a LoRA rank of 256 was also used. The reasoning was that this
 might have helped the model internalize any newly acquired information, making the
 training process closer to a full finetune.
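
For context, the setup the README describes (an 8-bit LoRA adapter with rank 256) would map onto an Axolotl config roughly like the sketch below. This is an illustrative fragment, not the actual config used for this model: the base model name and the alpha/target-module values are assumptions, and only the rank and 8-bit settings come from the text above.

```yaml
# Hedged sketch of an Axolotl LoRA config matching the README's description.
# base_model, lora_alpha, and lora_target_modules are illustrative assumptions.
base_model: meta-llama/Llama-2-13b-hf

adapter: lora
load_in_8bit: true        # "trained as an 8-bit LoRA adapter"
lora_r: 256               # the unusually high rank mentioned in the README
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
```

A rank of 256 is far above the common 8–64 range, which is why the resulting adapter file is so large; the trade-off the README describes is that a higher-rank adapter has more capacity to absorb new information, behaving closer to a full finetune.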