---
license: apache-2.0
datasets:
- Skylion007/openwebtext
language:
- en
pipeline_tag: text-generation
inference:
  parameters:
    do_sample: True
    temperature: 0.5
    top_p: 0.5
    top_k: 50
    max_new_tokens: 15
    repetition_penalty: 1.176
---
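The `inference` block above configures the hosted inference widget. As a minimal sketch, the same sampling settings can be reproduced locally with `transformers`; the repository id `Locutusque/TinyMistral-248M` is assumed from the leaderboard link below, and the prompt is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the leaderboard link below.
model_id = "Locutusque/TinyMistral-248M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The meaning of life is", return_tensors="pt")
# Mirror the sampling parameters from the front matter above.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.5,
    top_p=0.5,
    top_k=50,
    max_new_tokens=15,
    repetition_penalty=1.176,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```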
TinyMistral-248M is a pre-trained language model based on the Mistral 7B architecture, scaled down to approximately 248 million parameters. So far, it has been trained on 2,120,000 examples, and the batch size will remain low for future epochs. The model is not intended for direct use; it is meant to be fine-tuned on a downstream task, as sketched below.
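A minimal fine-tuning sketch using the `transformers` `Trainer`; the dataset name, text column, and hyperparameters below are illustrative placeholders, not the card's actual training setup:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "Locutusque/TinyMistral-248M"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers often lack a pad token

# Hypothetical downstream dataset with a "text" column.
dataset = load_dataset("your-org/your-task-dataset", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinymistral-finetuned",
        per_device_train_batch_size=4,  # kept small, per the note above
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # Causal LM objective: labels are the input ids, no masked-LM masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```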

During evaluation on InstructMix, this model achieved an average perplexity of 6.3 (lower is better). Further training runs are planned for this model.
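For reference, perplexity for a causal language model is the exponential of the mean per-token cross-entropy. A minimal sketch of how such a figure can be computed; the evaluation text is a placeholder, since the card does not specify the exact InstructMix preprocessing:

```python
import math

import torch

def perplexity(model, tokenizer, text: str) -> float:
    """Perplexity of `text`: exp of the mean per-token negative log-likelihood."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

# Example (model and tokenizer loaded as above):
# print(perplexity(model, tokenizer, "Some held-out text from the eval set."))
```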
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248m).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 24.18 |
| ARC (25-shot)       | 20.82 |
| HellaSwag (10-shot) | 26.98 |
| MMLU (5-shot)       | 23.11 |
| TruthfulQA (0-shot) | 46.89 |
| Winogrande (5-shot) | 50.75 |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 0.74  |