Pankaj Mathur committed
Commit 3f768b0
Parent: 0879a85

Create README.md

Files changed (1):
  1. README.md +110 -0
README.md ADDED
---
datasets:
- psmathur/orca_minis_uncensored_dataset
language:
- en
library_name: transformers
---

# orca_mini_v3_13b

A Llama2-13b model trained on Orca-style datasets.

**I am actively seeking sponsorship and partnership opportunities. If you're interested, please connect with me at www.linkedin.com/in/pankajam.**

## Evaluation

We evaluated orca_mini_v3_13b on a wide range of tasks using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
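
If you want to reproduce numbers like these yourself, the harness also exposes a Python entry point. The snippet below is only a rough sketch under assumptions: the `hf-causal` model string, the task name, and the few-shot count are illustrative, vary between harness versions, and may not match the exact configuration behind the scores reported here.

```python
# Rough reproduction sketch (requires `pip install lm-eval`).
# Assumptions: the "hf-causal" model string and task name follow older
# harness releases (newer releases rename some of these), and the
# 25-shot setting below is the Open LLM Leaderboard choice for ARC only.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=psmathur/orca_mini_v3_13b",
    tasks=["arc_challenge"],
    num_fewshot=25,
)

# Print the per-task metrics returned by the harness
for task, metrics in results["results"].items():
    print(task, metrics)
```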

Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

|**Task**|**Metric**|**Value**|**Stderr**|
|:------:|:--------:|:-------:|:--------:|
|*arc_challenge*|acc_norm|0.5717|0.0145|
|*hellaswag*|acc_norm|0.7966|0.0043|
|*mmlu*|acc_norm|0.5234|0.035|
|*truthfulqa_mc*|mc2|0.5029|0.0156|
|**Total Average**|-|**0.59865**||

## Example Usage

Here is the prompt format:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
I want to build the best Large Language Model, Give me detail step by step instructions on how to do it?

### Assistant:

```

The example below shows how to use this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the fp16 weights, spreading layers across available GPUs
tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_13b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/orca_mini_v3_13b",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Build the prompt in the format shown above
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
instruction = "I want to build the best Large Language Model, Give me detail step by step instructions on how to do it?"
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"

# Generate a response
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
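
If you prefer transformers' higher-level `pipeline` API, a roughly equivalent call is sketched below; the generation parameters are illustrative rather than values tuned for this checkpoint.

```python
import torch
from transformers import pipeline

# Sketch: the same prompt format, run through the high-level
# text-generation pipeline. Parameters are illustrative only.
generator = pipeline(
    "text-generation",
    model="psmathur/orca_mini_v3_13b",
    torch_dtype=torch.float16,
    device_map="auto",
)

system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
instruction = "I want to build the best Large Language Model, Give me detail step by step instructions on how to do it?"
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"

result = generator(prompt, do_sample=True, top_p=0.95, max_new_tokens=512, return_full_text=False)
print(result[0]["generated_text"])
```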
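If the fp16 weights do not fit in your GPU memory, one option is to load the model with 4-bit quantization through bitsandbytes. This is a minimal sketch under assumptions: it requires `bitsandbytes` and `accelerate` to be installed, and the quantization settings shown are common defaults rather than values validated for this checkpoint. Generation then works exactly as in the example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Sketch: load the 13B model in 4-bit NF4 to reduce GPU memory usage.
# Requires `pip install bitsandbytes accelerate`; these are common
# defaults, not values tested specifically for this checkpoint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_13b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/orca_mini_v3_13b",
    quantization_config=bnb_config,
    device_map="auto",
)
```
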
#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility of generating inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary.


### Citation:

Please kindly cite using the following BibTeX:

```
@misc{orca_mini_v3_13b,
  author = {Pankaj Mathur},
  title = {orca_mini_v3_13b: An explain tuned Llama2-13b model},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_13b}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```