---
license: apache-2.0
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- icybee/share_gpt_90k_v1
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- llm
---
# AmberChat

We present AmberChat, an instruction-following model finetuned from [LLM360/Amber](https://huggingface.co/LLM360/Amber).

## Model Description

- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
  - [Research paper](https://arxiv.org/)
  - [GitHub Repo](https://github.com/LLM360)
  - [Amber pretraining data](https://huggingface.co/)

# Loading AmberChat

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the AmberChat tokenizer and weights from the Hugging Face Hub
tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberChat")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberChat")

# Tokenize a prompt and run generation
input_text = "How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
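
With no arguments, `generate` falls back to greedy decoding with a short default output length, so replies are often truncated. The sketch below, not part of the original card, passes explicit decoding parameters; the sampling values are illustrative rather than settings tuned for AmberChat.

```python
# Sketch with explicit decoding parameters for chat-style replies.
# The values below are illustrative, not author-recommended settings.
outputs = model.generate(
    input_ids,
    max_new_tokens=256,   # default generation length is very short
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```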

# AmberChat Finetuning Details

## DataMix
| Subset | Number of rows | License |
| ----------- | ----------- | ----------- |
| WizardLM/WizardLM_evol_instruct_V2_196k | 143k | |
| icybee/share_gpt_90k_v1 | 90k | cc0-1.0 |
| Total | 233k | |
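
Both subsets are public on the Hugging Face Hub. The sketch below is an illustration rather than the authors' pipeline: it loads each subset with the `datasets` library and checks row counts, assuming each exposes a `train` split. Note that the 143k figure above suggests a filtered portion of the 196k-example WizardLM set, and the two schemas differ, so a real finetuning run would first map both to a shared conversation format.

```python
from datasets import load_dataset

# Load both instruction subsets (assumes each exposes a "train" split).
wizardlm = load_dataset("WizardLM/WizardLM_evol_instruct_V2_196k", split="train")
sharegpt = load_dataset("icybee/share_gpt_90k_v1", split="train")

# Sanity-check sizes against the DataMix table above.
print(f"WizardLM rows: {len(wizardlm)}")
print(f"ShareGPT rows: {len(sharegpt)}")
```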

## Hyperparameters
| Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 6.7B |
| Hidden Size | 4096 |
| Intermediate Size (MLPs) | 11008 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 32 |
| RMSNorm ε | 1e-6 |
| Max Seq Length | 2048 |
| Vocab Size | 32000 |
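
For reference, these values can be restated as a `transformers` `LlamaConfig`; this is only the table in code form, and the `config.json` shipped with the checkpoint remains authoritative.

```python
from transformers import LlamaConfig

# The hyperparameter table above, restated as a LlamaConfig (sketch only;
# the checkpoint's own config.json is the source of truth).
config = LlamaConfig(
    hidden_size=4096,
    intermediate_size=11008,
    num_attention_heads=32,
    num_hidden_layers=32,
    rms_norm_eps=1e-6,
    max_position_embeddings=2048,
    vocab_size=32000,
)
```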

# Evaluation

| Model | MT-Bench |
| ----------- | ----------- |
| LLM360/Amber (checkpoint 359) | 2.48750 |
| **LLM360/AmberChat** | **5.428125** |

# Citation

**BibTeX:**

```bibtex
@article{xxx,
  title={XXX},
  author={XXX},
  journal={XXX},
  year={2023}
}
```