---
language:
- en
- zh
tags:
- qwen
- llama
- llama-2
---
[WIP]
This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
You can use `LlamaForCausalLM` for model inference, just like with LLaMA/LLaMA-2 models (the tokenizer remains the same, so you still need to trust remote code when loading it, e.g. `AutoTokenizer.from_pretrained(llama_model_path, use_fast=False, trust_remote_code=True)`).
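A minimal loading sketch, assuming `llama_model_path` is a placeholder for a local copy (or the hub id) of this model; the dtype and device settings below are only illustrative:

```python
# Minimal inference setup; `llama_model_path` is a placeholder for this repo's
# local path or hub id, and the dtype/device choices are only illustrative.
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

llama_model_path = "path/to/this/model"  # placeholder

# The tokenizer is still Qwen's, so remote code has to be trusted.
tokenizer = AutoTokenizer.from_pretrained(
    llama_model_path, use_fast=False, trust_remote_code=True
)

# The weights follow the standard LLaMA/LLaMA-2 layout, so the plain class works.
model = LlamaForCausalLM.from_pretrained(
    llama_model_path, torch_dtype=torch.float16, device_map="auto"
).eval()
```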
The model has been edited to be white-labelled, meaning it will no longer identify itself as Qwen.
SPOILER: Further finetuning is in progress. The current version is a work in progress, and some knowledge may be biased or hallucinated due to the structural changes. Updates will come very, very sooooooooooon.
PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
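For illustration, a ChatML prompt can be assembled by hand as in the sketch below; it reuses the `model` and `tokenizer` from the loading sketch above, and the system message and sampling settings are placeholders, not part of this repository:

```python
# Assemble a ChatML prompt by hand and generate a reply
# (reuses `model` and `tokenizer` from the loading sketch above).
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```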
CURRENT MMLU: 50.36
Issue: Compared to the original Qwen-Chat, which scores 53.9, the MMLU score has dropped slightly (-3.54) due to insufficient realignment.
[WIP]
This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
You can use `LlamaForCausalLM` for model inference, just like with LLaMA/LLaMA-2 models (the tokenizer remains the same, so you still need to trust remote code when loading, e.g. `AutoTokenizer.from_pretrained(llama_model_path, use_fast=False, trust_remote_code=True)`).
The model has been edited to be white-labelled and no longer identifies itself as Qwen.
SPOILER: Further finetuning is in progress. The current version is a work in progress, and some knowledge may be biased or hallucinated due to the structural changes. Updates will come very, very soon.
PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
CURRENT MMLU: 50.36
Issue: Compared to the original Qwen-Chat, which scores 53.9, the MMLU score has dropped slightly (-3.54) due to insufficient realignment.