Continue Pretraining (#7, opened 3 months ago by HuggySSO)
Embedding from transformers (#6, opened 3 months ago by tillwenke)
"[...] mixture of full fine-tuning and LoRA was used to provide better generalization." (#5, opened 4 months ago by bobox)