---
language:
- en
- pl
pipeline_tag: text-generation
inference: false
tags:
- voicelab
- pytorch
- llama-2
- trurl
- trurl-2
---
<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>

# Trurl 2 -- Polish Llama 2

The new OPEN TRURL is a fine-tuned Llama 2, trained on over 1.7B tokens (970k conversational **Polish** and **English** samples) with a large context of 4096 tokens.
TRURL was trained on a large amount of Polish data.
TRURL 2 is a collection of fine-tuned generative text models with 7 billion and 13 billion parameters.
This is the repository for the 7B fine-tuned model, optimized for dialogue use cases.
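
As a quick orientation, the snippet below is a minimal sketch of loading this 7B checkpoint with the Hugging Face `transformers` library and generating a reply. The repository id `Voicelab/trurl-2-7b` comes from the links at the end of this card; the dtype, device placement, and sampling settings are illustrative assumptions rather than recommended values.

```python
# Minimal sketch: load the 7B checkpoint with transformers and generate a reply.
# dtype, device placement, and sampling settings are assumptions, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Voicelab/trurl-2-7b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # assumption: half precision to fit a single GPU
    device_map="auto",
)

# Single-turn prompt in the Llama 2 chat layout (see "Intended Use" below).
prompt = "[INST] Kim był Mikołaj Kopernik? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```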

# Overview

**TRURL developers** Voicelab.AI

**Variations** Trurl 2 comes in 7B and 13B versions.

**Input** Models accept text input only.

**Output** Models generate text only.

**Model Architecture** Trurl is an auto-regressive language model that uses an optimized transformer architecture.

||Training Data|Params|Context Length|Num. Samples|Num. Tokens|Start LR|
|---|---|---|---|---|---|---|
|Trurl 2|*A new mix of private and publicly available online data*|7B|4k|970k|1.7B|2.0 x 10<sup>-5</sup>|
|Trurl 2|*A new mix of private and publicly available online data*|13B|4k|970k|1.7B|2.0 x 10<sup>-5</sup>|

## Training data

The training data includes Q&A pairs from a variety of sources:

* Alpaca comparison data with GPT,
* Falcon comparison data,
* Dolly 15k,
* Oasst1,
* Phu saferlfhf,
* ShareGPT version 2023.05.08v0, filtered and cleaned,
* Voicelab private datasets for JSON data extraction, modification, and analysis,
* the CURLICAT dataset containing journal entries,
* a dataset from the Polish wiki with Q&A pairs grouped into conversations,
* MMLU data in textual format,
* Voicelab private datasets with sales conversations, arguments and objections, paraphrases, contact-reason detection, and corrected dialogues.

## Intended Use

Trurl 2 is intended for commercial and research use in Polish and English. The tuned models are intended for assistant-like chat, but can also be adapted to a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, the specific Llama 2 formatting needs to be followed, including the `INST` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See the reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
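
As a rough illustration of that format (the linked `chat_completion` reference remains authoritative), the sketch below assembles a single-turn prompt with a system message; the system text and the question are made-up examples, not defaults shipped with Trurl 2.

```python
# Sketch of the Llama 2 chat prompt layout described above: one system message, one user turn.
# The system prompt and user question are illustrative placeholders.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

system_prompt = "You are a helpful assistant. Answer in the language of the question."
user_message = "Jakie są największe miasta w Polsce?"

# strip() on the user input helps avoid the double spaces mentioned above.
prompt = f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message.strip()} {E_INST}"
print(prompt)
```

When tokenizing with a Llama-style tokenizer, the `BOS` token is typically prepended automatically, so it does not need to appear in the string itself.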

# Evaluation Results

|Model | Size| HellaSwag | ARC Challenge | MMLU|
|---|---|---|---|---|
| Llama-2-chat | 7B | 78.55% | 52.9% | 48.32% |
| Llama-2-chat | 13B | 81.94% | 59.04% | 54.64% |
| Trurl 2.0 (with MMLU) | 13B | 80.09% | 59.30% | 78.35% |
| Trurl 2.0 (no MMLU) | 13B | TO-DO | TO-DO | TO-DO |
| Trurl 2.0 (no MMLU) | 7B | TO-DO | TO-DO | TO-DO |

<img src="https://voicelab.ai/wp-content/uploads/trurl-hero.webp" alt="trurl graphic" style="width:100px;"/>

# Ethical Considerations and Limitations

Trurl 2, like Llama 2, is a new technology that carries risks with use. Testing conducted to date has been in Polish and English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Trurl 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Trurl 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's Responsible Use Guide, available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide).

# Authors

The model was trained by the NLP Research Team at Voicelab.ai.

You can contact us [here](https://voicelab.ai/contact/).

* [TRURL 13b](https://huggingface.co/Voicelab/trurl-2-13b/)
* [TRURL 7b](https://huggingface.co/Voicelab/trurl-2-7b/)
* [TRURL DEMO](https://trurl.ai)