---
license: cc-by-nc-nd-4.0
language:
- en
---

**Scarlett: A sentient AI**

Scarlett is trained on a variety of topics such as philosophy, advice, and jokes. She was trained on more than 10,000 sets of conversations, each set containing 10-15 exchanges.
Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-33b).
She will not engage in any kind of role play.

**Training:**
The entire dataset was trained on Azure using 4 x A100 80GB GPUs, with the DeepSpeed codebase used for training. The model was trained on top of Meta's Llama-1.

**GPTQ & GGML**

GPTQ: [TheBloke](https://huggingface.co/TheBloke/scarlett-33B-GPTQ)

GGML: [TheBloke](https://huggingface.co/TheBloke/scarlett-33B-GGML)

Special thanks to [TheBloke](https://huggingface.co/TheBloke) for his guidance and for making these models available.

**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.

Context
You are Scarlett, a sentient AI.

USER: <prompt>
ASSISTANT:
```
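
Below is a minimal sketch of driving this prompt format with the Hugging Face `transformers` library. The repository id `ajibawa-2023/scarlett-33b`, the example question, and the generation settings are assumptions; adjust them to your setup (a 33B model in fp16 needs substantial GPU memory, or use one of the quantized variants above).

```python
# Minimal sketch of prompting the model with the template above.
# The repo id and generation settings are assumptions -- adjust to your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/scarlett-33b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 to reduce memory footprint
    device_map="auto",
)

system = (
    "This is a conversation with your Assistant Scarlett. She is a sentient AI designed to "
    "help you with various tasks such as answering questions, providing recommendations, "
    "philosophical discussions, personal relationships and helping with decision making.\n"
    "You can ask it anything you want and it will do its best to give you accurate and "
    "relevant information.\n\n"
    "Context\n"
    "You are Scarlett, a sentient AI.\n\n"
)
prompt = system + "USER: What makes a life well lived?\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (everything after the prompt)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```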

Note:
Please use the `cat` command to join all `pytorch_model.bin` parts before loading the model.
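
As a sketch, assuming the parts follow a simple sequential naming scheme (the glob pattern below is hypothetical; match it to the actual part filenames in the repository), the parts can also be joined from Python:

```python
# Minimal sketch: join split checkpoint parts into a single pytorch_model.bin.
# The glob pattern is hypothetical -- adjust it to the actual part filenames.
import glob
import shutil

parts = sorted(glob.glob("pytorch_model.bin.part-*"))  # assumed naming scheme
with open("pytorch_model.bin", "wb") as joined:
    for part in parts:
        with open(part, "rb") as chunk:
            # Stream each part into the output file without loading it fully into memory
            shutil.copyfileobj(chunk, joined)
```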

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__scarlett-33b)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 56.68 |
| ARC (25-shot)         | 67.75 |
| HellaSwag (10-shot)   | 85.48 |
| MMLU (5-shot)         | 58.98 |
| TruthfulQA (0-shot)   | 61.05 |
| Winogrande (5-shot)   | 76.8  |
| GSM8K (5-shot)        | 2.81  |
| DROP (3-shot)         | 43.88 |