vLLM-compatible random Medusa heads for llama-68m (only to test code)
d92a54c
{
  "_name_or_path": "abhigoyal/vllm-medusa-llama-68m-random",
  "architectures": ["MedusaModel"],
  "hidden_size": 768,
  "model_type": "medusa",
  "num_heads": 5,
  "num_hidden_layers": 1,
  "transformers_version": "4.41.2",
  "truncated_vocab_size": null,
  "vocab_size": 32000
}
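As a quick sanity check, the config can be parsed and inspected with the standard library alone. A minimal sketch (the reading of `num_heads` as the number of speculative Medusa heads follows the Medusa design; no vLLM install is assumed here):

```python
import json

# config.json for the random Medusa heads above, reproduced verbatim.
CONFIG_TEXT = (
    '{"_name_or_path": "abhigoyal/vllm-medusa-llama-68m-random", '
    '"architectures": ["MedusaModel"], "hidden_size": 768, '
    '"model_type": "medusa", "num_heads": 5, "num_hidden_layers": 1, '
    '"transformers_version": "4.41.2", "truncated_vocab_size": null, '
    '"vocab_size": 32000}'
)

config = json.loads(CONFIG_TEXT)

# Each Medusa head predicts one additional speculative token per step,
# so num_heads = 5 means the model proposes up to 5 draft tokens at once.
print(config["model_type"], config["num_heads"], config["hidden_size"])
# → medusa 5 768
```

`truncated_vocab_size` being `null` means the heads predict over the full 32000-token vocabulary rather than a truncated one.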