---
license: other
datasets:
- DrNicefellow/CHAT-ALL-IN-ONE-v1
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
model-index:
- name: ChatAllInOne-Yi-34B-200K-V1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.96
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.53
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 74.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 56.96
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
      name: Open LLM Leaderboard
---

# ChatAllInOne-Yi-34B-200K-V1

## Description

ChatAllInOne-Yi-34B-200K-V1 is a chat language model fine-tuned on the CHAT-ALL-IN-ONE-v1 dataset using the QLoRA technique with the unsloth tool. Based on the 01-ai/Yi-34B-200K model, this version is specifically optimized for diverse and comprehensive chat applications. A rough sketch of what such a QLoRA setup looks like with unsloth is included at the end of this card.

## Model Details

- **Base Model**: [01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K)
- **Fine-tuning Technique**: QLoRA (Quantized Low-Rank Adaptation)
- **Dataset**: [CHAT-ALL-IN-ONE-v1](https://huggingface.co/datasets/DrNicefellow/CHAT-ALL-IN-ONE-v1)
- **Tool Used for Fine-tuning**: [unsloth](https://github.com/unslothai/unsloth)

## Features

- Enhanced understanding and generation of conversational language.
- Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations.
- Fine-tuned to maintain context and coherence over longer dialogues.

## Prompt Format

Vicuna 1.1. See the fine-tuning dataset for examples; a minimal usage sketch follows below.
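As a rough illustration, the snippet below loads the model with 🤗 Transformers and wraps a single user message in the standard Vicuna 1.1 template. This is a minimal sketch rather than the author's exact inference setup: the system preamble, generation parameters, and loading options are assumptions, and a 34B model generally needs multiple GPUs or quantized loading in practice.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrNicefellow/ChatAllInOne-Yi-34B-200K-V1"

# Load the tokenizer and model; device_map="auto" spreads the 34B weights
# across the available GPUs (use quantization if memory is tight).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Vicuna 1.1 prompt template (this system preamble is an assumed default).
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
user_message = "What is the capital of France?"
prompt = f"{system} USER: {user_message} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens (the assistant's reply).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For multi-turn chats, each assistant reply is terminated with the end-of-sequence token and the next turn is appended as another `USER: ... ASSISTANT:` pair.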
## License

This model is open-sourced under the [Yi License](https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE).

## Feeling Generous?

😊 Eager to buy me a cup of $2 coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note telling me which one you'd like me to drink.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DrNicefellow__ChatAllInOne-Yi-34B-200K-V1).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 70.56 |
| AI2 Reasoning Challenge (25-Shot) | 65.96 |
| HellaSwag (10-Shot)               | 84.53 |
| MMLU (5-Shot)                     | 74.13 |
| TruthfulQA (0-shot)               | 56.96 |
| Winogrande (5-shot)               | 82.72 |
| GSM8k (5-shot)                    | 59.06 |
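Finally, for readers curious how the QLoRA fine-tune described above can be set up with unsloth, the sketch below shows the general pattern: load the base model in 4-bit and attach LoRA adapters before running supervised fine-tuning. This is an illustrative assumption, not the author's actual training script; the sequence length, LoRA rank, target modules, and other hyperparameters are placeholders.

```python
from unsloth import FastLanguageModel

# Load the base model with 4-bit quantization (the "Q" in QLoRA).
# The sequence length here is a placeholder, not the 200K context window.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="01-ai/Yi-34B-200K",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections
# (the "LoRA" in QLoRA); only these small adapter weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing=True,
)

# Training itself would then proceed with a standard supervised fine-tuning
# loop (e.g. trl's SFTTrainer) over the CHAT-ALL-IN-ONE-v1 dataset
# formatted as Vicuna 1.1 conversations.
```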