---
language:
- fr
- en
license: mit
library_name: transformers
tags:
- french
- chocolatine
datasets:
- jpacifico/french-orca-dpo-pairs-revised
pipeline_tag: text-generation
model-index:
- name: Chocolatine-3B-Instruct-DPO-Revised
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 56.23
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Chocolatine-3B-Instruct-DPO-Revised
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 37.16
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Chocolatine-3B-Instruct-DPO-Revised
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 14.5
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Chocolatine-3B-Instruct-DPO-Revised
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.62
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Chocolatine-3B-Instruct-DPO-Revised
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.1
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Chocolatine-3B-Instruct-DPO-Revised
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 33.21
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Chocolatine-3B-Instruct-DPO-Revised
      name: Open LLM Leaderboard
---
# Chocolatine-3B-Instruct-DPO-Revised

DPO fine-tune of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) (3.82B params),
trained on the [jpacifico/french-orca-dpo-pairs-revised](https://huggingface.co/datasets/jpacifico/french-orca-dpo-pairs-revised) RLHF dataset.
Chocolatine is a general-purpose model and can itself be fine-tuned for specific use cases.

Context window: 4k tokens
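Because the context window is only 4k tokens, it can help to sanity-check prompt length before generation. A minimal sketch, assuming a rough 4-characters-per-token heuristic (the exact count requires the model's tokenizer):

```python
# Rough pre-flight check that a prompt plus its generation budget fits the
# 4k-token context window. The chars-per-token ratio is a crude heuristic
# (assumption), not the model's real tokenizer; use tokenizer(prompt) for
# an exact token count.
CONTEXT_WINDOW = 4096

def fits_in_context(prompt: str, max_new_tokens: int = 200,
                    chars_per_token: float = 4.0) -> bool:
    estimated_prompt_tokens = len(prompt) / chars_per_token
    return estimated_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

print(fits_in_context("What is a Large Language Model?"))  # True
```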
### Benchmarks

Submitted to the Open LLM Leaderboard; results should be available in a few days.

The first version, Chocolatine-3B-Instruct-DPO-v1.0, is already one of the best-performing 3B models on the Open LLM Leaderboard.
### MT-Bench-French

Chocolatine-3B-Instruct-DPO-Revised outperforms GPT-3.5-Turbo on MT-Bench-French by Bofeng Huang,
evaluated with multilingual-mt-bench.
```
########## First turn ##########
                                          score
model                               turn
gpt-3.5-turbo                       1     8.1375
Chocolatine-3B-Instruct-DPO-Revised 1     7.9875
Daredevil-8B                        1     7.8875
Daredevil-8B-abliterated            1     7.8375
Chocolatine-3B-Instruct-DPO-v1.0    1     7.6875
NeuralDaredevil-8B-abliterated      1     7.6250
Phi-3-mini-4k-instruct              1     7.2125
Meta-Llama-3-8B-Instruct            1     7.1625
vigostral-7b-chat                   1     6.7875
Mistral-7B-Instruct-v0.3            1     6.7500
Mistral-7B-Instruct-v0.2            1     6.2875
French-Alpaca-7B-Instruct_beta      1     5.6875
vigogne-2-7b-chat                   1     5.6625
vigogne-2-7b-instruct               1     5.1375

########## Second turn ##########
                                          score
model                               turn
Chocolatine-3B-Instruct-DPO-Revised 2     7.937500
gpt-3.5-turbo                       2     7.679167
Chocolatine-3B-Instruct-DPO-v1.0    2     7.612500
NeuralDaredevil-8B-abliterated      2     7.125000
Daredevil-8B                        2     7.087500
Daredevil-8B-abliterated            2     6.873418
Meta-Llama-3-8B-Instruct            2     6.800000
Mistral-7B-Instruct-v0.2            2     6.512500
Mistral-7B-Instruct-v0.3            2     6.500000
Phi-3-mini-4k-instruct              2     6.487500
vigostral-7b-chat                   2     6.162500
French-Alpaca-7B-Instruct_beta      2     5.487395
vigogne-2-7b-chat                   2     2.775000
vigogne-2-7b-instruct               2     2.240506

########## Average ##########
                                    score
model
Chocolatine-3B-Instruct-DPO-Revised 7.962500
gpt-3.5-turbo                       7.908333
Chocolatine-3B-Instruct-DPO-v1.0    7.650000
Daredevil-8B                        7.487500
NeuralDaredevil-8B-abliterated      7.375000
Daredevil-8B-abliterated            7.358491
Meta-Llama-3-8B-Instruct            6.981250
Phi-3-mini-4k-instruct              6.850000
Mistral-7B-Instruct-v0.3            6.625000
vigostral-7b-chat                   6.475000
Mistral-7B-Instruct-v0.2            6.400000
French-Alpaca-7B-Instruct_beta      5.587866
vigogne-2-7b-chat                   4.218750
vigogne-2-7b-instruct               3.698113
```
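The average table is simply the per-model mean of the two turn scores. A quick sanity check against a few rows reported above:

```python
# Verify that the reported Average equals the mean of turn-1 and turn-2
# scores, using three rows taken directly from the MT-Bench-French tables.
turn_scores = {
    "Chocolatine-3B-Instruct-DPO-Revised": (7.9875, 7.937500),
    "gpt-3.5-turbo": (8.1375, 7.679167),
    "Phi-3-mini-4k-instruct": (7.2125, 6.487500),
}
reported_avg = {
    "Chocolatine-3B-Instruct-DPO-Revised": 7.962500,
    "gpt-3.5-turbo": 7.908333,
    "Phi-3-mini-4k-instruct": 6.850000,
}
for model, (t1, t2) in turn_scores.items():
    assert abs((t1 + t2) / 2 - reported_avg[model]) < 1e-4, model
print("averages consistent")
```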
### Usage

You can run this model using my Colab notebook.

You can also run Chocolatine with the following code:
```python
import transformers
from transformers import AutoTokenizer

model_name = "jpacifico/Chocolatine-3B-Instruct-DPO-Revised"

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
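For reference, `apply_chat_template` renders the message list into the Phi-3 chat format this model inherits. The sketch below approximates that rendering offline; the exact role tags are an assumption based on the Phi-3 format, and the tokenizer's own template remains authoritative:

```python
# Offline approximation of the Phi-3-style chat format (assumption -- the
# authoritative template is tokenizer.apply_chat_template).
def build_phi3_prompt(messages, add_generation_prompt=True):
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|assistant|>\n")  # cue the model to answer
    return "".join(parts)

message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
]
print(build_phi3_prompt(message))
```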
### Limitations

The Chocolatine model is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.

- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** MIT
### Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.63 |
| IFEval (0-Shot)     | 56.23 |
| BBH (3-Shot)        | 37.16 |
| MATH Lvl 5 (4-Shot) | 14.50 |
| GPQA (0-shot)       |  9.62 |
| MuSR (0-shot)       | 15.10 |
| MMLU-PRO (5-shot)   | 33.21 |
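The reported average is the mean of the six benchmark scores; the last digit differs slightly when recomputed from the rounded values shown here, presumably because the leaderboard averages the unrounded scores:

```python
# Recompute the leaderboard average from the six per-benchmark scores above.
scores = {
    "IFEval (0-Shot)": 56.23,
    "BBH (3-Shot)": 37.16,
    "MATH Lvl 5 (4-Shot)": 14.50,
    "GPQA (0-shot)": 9.62,
    "MuSR (0-shot)": 15.10,
    "MMLU-PRO (5-shot)": 33.21,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # ~27.64 from rounded inputs; the card reports 27.63
```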