---
language:
  - en
license: apache-2.0
tags:
  - gpt2
  - dpo
  - trl
  - TensorBlock
  - GGUF
datasets:
  - HuggingFaceH4/ultrachat_200k
  - Intel/orca_dpo_pairs
pipeline_tag: text-generation
base_model: Sharathhebbar24/chat_gpt2_dpo
model-index:
  - name: chat_gpt2_dpo
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 23.98
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 31.22
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 24.95
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 41.26
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 49.96
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
          name: Open LLM Leaderboard
---

TensorBlock

Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server

## Sharathhebbar24/chat_gpt2_dpo - GGUF

This repo contains GGUF format model files for Sharathhebbar24/chat_gpt2_dpo.

The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4242.
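Once a quantized file from the table below has been downloaded, it can be run directly with the standard llama.cpp tools. The following is a minimal sketch, assuming llama.cpp has been built locally and the Q4_K_M file sits in the working directory; the prompt, token count, and sampling temperature are placeholder values.

```shell
# Minimal sketch: run the quantized model interactively with llama.cpp's CLI.
# Assumes llama.cpp is built locally; prompt and sampling flags are example values.
./llama-cli \
  -m chat_gpt2_dpo-Q4_K_M.gguf \
  -p "Hello, how are you?" \
  -n 128 \
  --temp 0.7
```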

## Prompt template


## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| chat_gpt2_dpo-Q2_K.gguf | Q2_K | 0.081 GB | smallest, significant quality loss - not recommended for most purposes |
| chat_gpt2_dpo-Q3_K_S.gguf | Q3_K_S | 0.090 GB | very small, high quality loss |
| chat_gpt2_dpo-Q3_K_M.gguf | Q3_K_M | 0.098 GB | very small, high quality loss |
| chat_gpt2_dpo-Q3_K_L.gguf | Q3_K_L | 0.102 GB | small, substantial quality loss |
| chat_gpt2_dpo-Q4_0.gguf | Q4_0 | 0.107 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| chat_gpt2_dpo-Q4_K_S.gguf | Q4_K_S | 0.107 GB | small, greater quality loss |
| chat_gpt2_dpo-Q4_K_M.gguf | Q4_K_M | 0.113 GB | medium, balanced quality - recommended |
| chat_gpt2_dpo-Q5_0.gguf | Q5_0 | 0.122 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| chat_gpt2_dpo-Q5_K_S.gguf | Q5_K_S | 0.122 GB | large, low quality loss - recommended |
| chat_gpt2_dpo-Q5_K_M.gguf | Q5_K_M | 0.127 GB | large, very low quality loss - recommended |
| chat_gpt2_dpo-Q6_K.gguf | Q6_K | 0.138 GB | very large, extremely low quality loss |
| chat_gpt2_dpo-Q8_0.gguf | Q8_0 | 0.178 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face CLI client:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/chat_gpt2_dpo-GGUF --include "chat_gpt2_dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/chat_gpt2_dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
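Beyond the CLI download, a common next step is to expose the downloaded file over llama.cpp's built-in HTTP server. The snippet below is a hedged sketch, assuming llama.cpp is built locally and the Q4_K_M file was fetched into `MY_LOCAL_DIR`; the port and context size are example values (GPT-2's native context is 1024 tokens).

```shell
# Sketch: serve the downloaded GGUF file with llama.cpp's HTTP server (llama-server).
# Assumes llama.cpp is built locally; port and context size are example values.
./llama-server \
  -m MY_LOCAL_DIR/chat_gpt2_dpo-Q4_K_M.gguf \
  --port 8080 \
  -c 1024
```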