Pre-trained models in MiniPLM: Knowledge Distillation for Pre-Training Language Models
MiniLLM (community organization)

AI & ML interests: Training efficient language models (MiniLLM, MiniPLM)

Models (46 total; a selection is listed below)
MiniLLM/MiniPLM-llama3.1-212M (Text Generation)
MiniLLM/MiniPLM-Mamba-130M (Text Generation)
MiniLLM/Pretrain-Qwen-200M (Text Generation)
MiniLLM/Pretrain-Qwen-500M (Text Generation)
MiniLLM/Pretrain-Qwen-1.2B (Text Generation)
MiniLLM/MiniPLM-Qwen-200M (Text Generation)
MiniLLM/MiniPLM-Qwen-500M (Text Generation)
MiniLLM/MiniPLM-Qwen-1.2B (Text Generation)
MiniLLM/init-Llama-7B (Text Generation)
MiniLLM/teacher-Llama-13B (Text Generation)
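The checkpoints above can be pulled directly from the Hub. Below is a minimal sketch using the Hugging Face transformers library, assuming the repositories ship standard, transformers-compatible causal-LM weights and tokenizers (the chosen model ID is just one example from the list; some architectures may additionally require trust_remote_code):

```python
# Minimal sketch: load a MiniPLM checkpoint as a standard causal LM.
# Assumes the repo contains transformers-compatible weights and a tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-Qwen-500M"  # any model from the list above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Knowledge distillation for pre-training", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```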
Datasets (9)
MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
MiniLLM/roberta-corpus-processed
MiniLLM/openwebtext-processed
MiniLLM/dolly-processed (Viewer, ~110k rows)
MiniLLM/sinst (Viewer, ~8.35k rows)
MiniLLM/uinst (Viewer, ~64.8k rows)
MiniLLM/self-inst (Viewer, 242 rows)
MiniLLM/Vicuna (Viewer, 80 rows)
MiniLLM/dolly (Viewer, 500 rows)
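The datasets can be fetched with the Hugging Face datasets library. A minimal sketch, assuming the repository exposes a default configuration and a "train" split (the split name is an assumption; check the dataset viewer for the actual splits):

```python
# Minimal sketch: load one of the processed datasets from the Hub.
# The "train" split name is an assumption, not confirmed by the listing.
from datasets import load_dataset

ds = load_dataset("MiniLLM/dolly-processed", split="train")
print(len(ds))   # number of rows
print(ds[0])     # inspect one example
```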