Inquiry about details

#1
by jiayiplus - opened

Can the author provide an introduction to this model? I am very interested in knowing how these models are generated. Is it through knowledge distillation? If so, what are the teacher and student models, respectively?

The model was distilled using logit-based knowledge distillation on an 80GB GPU, with an alpha of 0.8 over two epochs. It was trained on the Sonnet-3.5-ITA-INSTRUCT dataset (70k examples); this setup lets the student closely mimic the teacher's predictive patterns while remaining efficient.
The teacher model was Llama-3.1-70B.
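For readers unfamiliar with logit-based distillation, here is a minimal sketch of what such a loss typically looks like. The alpha of 0.8 is taken from the answer above; the temperature value and the exact loss formulation are assumptions for illustration, since the thread does not specify them.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=0.8, temperature=2.0):
    """Blend a soft logit-matching loss with the standard hard-label loss.

    alpha=0.8 matches the value mentioned above; temperature=2.0 is an
    illustrative choice, not confirmed by the author.
    """
    # Soft targets: KL divergence between temperature-scaled distributions,
    # so the student mimics the teacher's full predictive distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard targets: ordinary cross-entropy against the ground-truth tokens.
    hard_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,  # skip masked prompt/padding positions
    )

    # alpha weights the distillation term against the hard-label term.
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

With alpha at 0.8, the student is driven mostly by the teacher's logits rather than the raw labels, which fits the goal of closely mimicking the teacher's predictive patterns.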

DeepMount00 changed discussion status to closed
