caliburn 12b-merged
This is a 12-billion-parameter language model created by merging multiple existing models with the MergeKit library. It is intended for general text generation tasks.
Model Details
Model Description
This is a large language model with 12 billion parameters, created by merging multiple pre-existing models using the MergeKit library (an illustrative merge configuration is sketched below). The model uses the transformer architecture and is intended for general text generation tasks.
- Developed by: Xclbr7
- Model type: Transformer-based language model
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: Multiple source models merged using MergeKit
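The exact merge recipe for this model is not documented in this card. Purely as an illustration, MergeKit merges of this kind are usually described by a small YAML configuration; the model names, weights, and merge method below are hypothetical placeholders, not the actual recipe.

```yaml
# Hypothetical MergeKit configuration -- the real source models, weights, and
# merge method behind caliburn-12b are not documented in this card.
models:
  - model: org-a/example-12b-instruct   # placeholder source model
    parameters:
      weight: 0.5
  - model: org-b/example-12b-base       # placeholder source model
    parameters:
      weight: 0.5
merge_method: linear                    # MergeKit also supports slerp, ties, dare_ties, etc.
dtype: bfloat16
```

A configuration like this would be applied with MergeKit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yml ./merged-model`.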
Model Sources
- Repository: https://huggingface.co/Xclbr7/caliburn-12b
- Paper: N/A
- Demo: N/A
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 22.68 |
| IFEval (0-Shot) | 35.76 |
| BBH (3-Shot) | 35.64 |
| MATH Lvl 5 (4-Shot) | 9.67 |
| GPQA (0-shot) | 11.52 |
| MuSR (0-shot) | 13.78 |
| MMLU-PRO (5-shot) | 29.72 |
Direct Use
This model can be used for a range of natural language processing tasks (a minimal usage sketch follows the list), including:
- Text generation
- Code completion
- Question answering
- Summarization
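As a minimal sketch of such uses, the model can be driven through the transformers `pipeline` API; the question-answering prompt and generation settings below are only illustrative examples.

```python
from transformers import pipeline
import torch

# Text-generation pipeline for the merged model (loads weights in half precision
# and places them automatically across available devices).
generator = pipeline(
    "text-generation",
    model="Xclbr7/caliburn-12b",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example: question answering phrased as plain text completion.
out = generator("Question: What is the capital of France?\nAnswer:", max_new_tokens=30)
print(out[0]["generated_text"])
```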
Downstream Use
The model can be fine-tuned for specific tasks or domains to improve performance on targeted applications.
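As one example of how such fine-tuning could be set up, parameter-efficient adaptation with LoRA via the peft library is a common approach for a model of this size; the rank, target modules, and other hyperparameters below are illustrative assumptions, not values validated for this model.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
import torch

# Load the merged model and attach LoRA adapters; only the adapters are trained.
model = AutoModelForCausalLM.from_pretrained("Xclbr7/caliburn-12b", torch_dtype=torch.bfloat16)
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names; check the model's architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms that only the adapter weights are trainable
```

The wrapped model can then be trained with the standard transformers Trainer or a custom training loop.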
Out-of-Scope Use
This model should not be used for generating harmful, biased, or unethical content. It should not be relied upon for critical decision-making without human oversight.
Bias, Risks, and Limitations
- The model may inherit biases present in its training data or source models.
- It may generate incorrect or nonsensical information.
- The model's outputs should be carefully reviewed and fact-checked.
Recommendations
Users should be aware of the model's limitations and potential biases. It's recommended to use the model with appropriate content filtering and human oversight, especially for public-facing applications.
How to Get Started with the Model
Use the following code to get started with the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the tokenizer and the model in half precision on a CUDA GPU
tokenizer = AutoTokenizer.from_pretrained("Xclbr7/caliburn-12b")
model = AutoModelForCausalLM.from_pretrained("Xclbr7/caliburn-12b", torch_dtype=torch.float16).to("cuda")

# Tokenize the prompt and generate up to 100 new tokens
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)

# Decode and print the generated text
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(result[0])
```
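In float16 the weights alone of a 12B-parameter model occupy roughly 24 GB (12 × 10^9 parameters × 2 bytes), so a 24 GB GPU is close to the minimum for the snippet above. If that does not fit, 4-bit loading through bitsandbytes is one option; the settings below are a sketch, not a tested configuration for this model.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Load the model with 4-bit weights to cut memory use (requires the bitsandbytes package).
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "Xclbr7/caliburn-12b",
    quantization_config=quant_config,
    device_map="auto",
)
```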