---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
datasets:
  - Open-Orca/SlimOrca
  - HuggingFaceH4/no_robots
  - Intel/orca_dpo_pairs
  - rizerphe/glaive-function-calling-v2-zephyr
  - codefuse-ai/Evol-instruction-66k
library_name: transformers
pipeline_tag: text-generation
---

# AWQ GEMM quant of TokenBender/pic_7B_mistral_Full_v0.2
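This repo holds 4-bit AWQ weights in the GEMM kernel layout. Below is a minimal sketch of how such a quant is typically produced with AutoAWQ; the calibration settings shown are AutoAWQ's common defaults, assumed here rather than the verified settings for this repo.

```python
# Sketch only: quantizing the source model with AutoAWQ (pip install autoawq).
# The quant_config values are assumed defaults; "GEMM" selects the kernel
# layout named in the title above.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "TokenBender/pic_7B_mistral_Full_v0.2"
quant_path = "pic_7B_mistral_Full_v0.2-AWQ"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)  # runs calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```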

## pic_7B_mistral_Full_v0.2

### PIC_7B_Mistral (First phase)

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1. The curated, decontaminated subsets of the datasets used are listed in the model card metadata; all of them are public as of this model's release.

Collaborate or consult with me: Twitter, Discord.

The recommended prompt format is ChatML; Alpaca will also work, but take care with the EOT token.

## Chat Model Inference
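A minimal inference sketch, assuming transformers >= 4.35 with autoawq installed (AWQ checkpoints then load directly through `AutoModelForCausalLM`); the repo id below is a placeholder for this repo's actual id or a local path:

```python
# Minimal chat inference sketch using the ChatML format recommended above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-awq-repo"  # placeholder: use this repo's id or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML prompt, per the recommendation above.
prompt = (
    "<|im_start|>system\nYou are a helpful partner-in-crime.<|im_end|>\n"
    "<|im_start|>user\nWrite a one-line Python hello world.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```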

## Model description

This is the first generic model of Project PIC (Partner-in-Crime) in the 7B range. I'm trying a bunch of things right now and seeing what sticks.

Empathy + coding + instruction/JSON/function-calling adherence is my game. I'm finding lots of challenges and insights in this effort; patience is key.

## Intended uses & limitations

The model should be useful in a generic capacity; it demonstrates a little bit of everything.

Basic tests so far:

- Roleplay: adherence to character is present.
- JSON/function calling: passing (a hypothetical round trip is sketched below).
- Coding: to be evaluated.
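For illustration, here is a hypothetical function-calling exchange in ChatML. The tool-description schema in the system message is an assumption (the exact format the model saw via glaive-function-calling-v2 is not documented here), so treat it as a sketch:

```python
# Hypothetical function-calling round trip; the tool schema below is an
# assumption, not a documented format for this model.
import json

tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

prompt = (
    f"<|im_start|>system\nYou may call this function:\n{json.dumps(tool)}<|im_end|>\n"
    "<|im_start|>user\nWhat's the weather in Berlin?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Generate with the model as in the inference sketch above; a passing reply
# should be parseable JSON naming the function and its arguments, e.g.:
reply = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
call = json.loads(reply)
print(call["name"], call["arguments"])
```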

## Training procedure

Supervised fine-tuning (SFT) followed by direct preference optimization (DPO).
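As a rough sketch of the DPO stage only, assuming trl's `DPOTrainer` and the Intel/orca_dpo_pairs preference data listed in the metadata (the actual hyperparameters are not published, and the values below are guesses):

```python
# Rough DPO sketch; a real run would start from the SFT checkpoint, not the
# raw base model, and would use the author's undisclosed hyperparameters.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mistralai/Mistral-7B-v0.1"  # stand-in for the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# DPOTrainer expects prompt/chosen/rejected columns.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")
ds = ds.map(
    lambda r: {"prompt": r["system"] + "\n" + r["question"],
               "chosen": r["chosen"], "rejected": r["rejected"]},
    remove_columns=ds.column_names,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl clones a frozen reference model when None
    beta=0.1,        # assumed; a common default
    train_dataset=ds,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
    args=TrainingArguments(
        output_dir="pic-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
    ),
)
trainer.train()
```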

## Training results

HumanEval and EvalPlus results are to be shared as well.
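For reference, a sketch of producing HumanEval+ samples with evalplus's data helpers; `generate()` is a placeholder for the chat inference shown earlier:

```python
# Sketch of sample generation for HumanEval+ using evalplus's helpers.
from evalplus.data import get_human_eval_plus, write_jsonl

def generate(code_prompt: str) -> str:
    raise NotImplementedError  # placeholder: complete the prompt with the model

samples = [
    {"task_id": task_id, "solution": generate(problem["prompt"])}
    for task_id, problem in get_human_eval_plus().items()
]
write_jsonl("samples.jsonl", samples)
# Scoring is then done with evalplus's evaluate entry point (see its docs).
```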

## Framework versions

- Transformers 4.35.2
- PyTorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0