please start adding the tokenizer.chat_template
Many quant makers don't include the chat template, which makes it a pain to search around and find the right template for a specific model. But some people who make quants do add it, and that is super useful when using Ollama. I currently keep a folder of Modelfiles with many templates, but a lot of the time they don't work, or it's tricky to figure out which one matches a specific model. When the template is embedded in the GGUF, all an Ollama user has to do is write a bare Modelfile with a simple FROM line (see the sketch below) and it just works. Thank you for all the quants you do provide, however.
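To illustrate what I mean, here is a minimal sketch (the filename and model name are just placeholders for whatever quant you downloaded): when the GGUF already carries `tokenizer.chat_template`, the whole Modelfile can be a single FROM line and Ollama picks the template up from the file's metadata, so no TEMPLATE block is needed.

```
# Modelfile — no TEMPLATE block needed when the GGUF embeds tokenizer.chat_template
FROM ./SomeModel.Q4_K_M.gguf
```

Then it's just the usual two commands:

```
ollama create somemodel -f Modelfile
ollama run somemodel
```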
I don't know of anybody who manually adds the chat template (if that is what you mean). If the model correctly specifies a template, it will end up in the quant. If not, then that is a bug in llama.cpp. And as you can easily see, most models specify a chat template, and therefore most of my quants have it, too. And specifically the model you mention has a chat template in my quants of it, which makes your request even more puzzling.
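If you want to check for yourself whether a given quant carries the template, here is a rough sketch using the `gguf` Python package that ships with llama.cpp (`pip install gguf`); the path is a placeholder, and the exact field layout can vary between gguf versions, so treat this as an illustration rather than a polished tool:

```python
# Sketch: check whether a GGUF file has an embedded chat template.
from gguf import GGUFReader

reader = GGUFReader("SomeModel.Q4_K_M.gguf")  # placeholder path

field = reader.fields.get("tokenizer.chat_template")
if field is None:
    print("no chat template embedded in this quant")
else:
    # For a plain string field, the last part holds the raw UTF-8 bytes.
    template = bytes(field.parts[-1]).decode("utf-8")
    print(template)
```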
Ah I didn't know this, thank you for clarifying ^_^