---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
---

# llm-jp-3-172b-alpha2-instruct

This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).

The development was partially supported by [GENIAC](https://www.meti.go.jp/policy/mono_info_service/geniac/index.html).

| Model Variants |
| :--- |
| [llm-jp-3-172b-alpha1](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1) |
| [llm-jp-3-172b-alpha1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1-instruct) |
| [llm-jp-3-172b-alpha2](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2) |
| [llm-jp-3-172b-alpha2-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2-instruct) |
| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) |

Checkpoints format: Hugging Face Transformers

## Required Libraries and Their Versions

- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
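
As a quick environment check (not part of the original card), the installed versions can be printed with the standard library and compared against the minimums listed above:

```python
# Minimal sketch: report the installed version of each required package.
from importlib.metadata import version, PackageNotFoundError

for pkg in ["torch", "transformers", "tokenizers", "accelerate", "flash-attn"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```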

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model; device_map="auto" shards the bfloat16
# weights across the available devices.
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-172b-alpha2-instruct")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-172b-alpha2-instruct", device_map="auto", torch_dtype=torch.bfloat16)

chat = [
    # System prompt (Japanese): "The following is an instruction describing a task. Write a response that appropriately satisfies the request."
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    # User message (Japanese): "What is natural language processing?"
    {"role": "user", "content": "自然言語処理とは何か"},
]

# Render the conversation with the model's chat template and generate a reply.
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
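
`tokenizer.decode(output)` prints the full sequence, prompt included. If only the model's reply is wanted, the newly generated tokens can be decoded on their own; this small sketch reuses `tokenized_input` and `output` from the snippet above:

```python
# Decode only the tokens produced after the prompt and drop special tokens.
prompt_length = tokenized_input.shape[1]
response = tokenizer.decode(output[prompt_length:], skip_special_tokens=True)
print(response)
```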

## Model Details

- **Model type:** Transformer-based Language Model
- **Total seen tokens:**
  - alpha1: 0.7T
  - alpha2: 1.4T
  - beta1: 0.7T

|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|
|172b|96|12288|96|4096|
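
As a rough cross-check (not from the original card), the shapes in the table give a back-of-envelope parameter count under textbook assumptions; the 4x FFN expansion and the ~100k vocabulary are guesses, and the actual architecture (e.g. the real FFN intermediate size) may differ:

```python
# Back-of-envelope estimate for a dense transformer with the shapes above.
d_model, n_layers = 12288, 96
vocab = 100_000  # assumed vocabulary size, not stated in this card

attention = 4 * d_model * d_model      # Q, K, V and output projections
ffn = 2 * d_model * (4 * d_model)      # assumed 4x expansion, two projections
per_layer = attention + ffn            # roughly 12 * d_model^2
total = n_layers * per_layer + vocab * d_model

print(f"~{total / 1e9:.0f}B parameters")  # ~175B, in the ballpark of "172b"
```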

## Tokenizer

The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (note that plain SentencePiece training does not reproduce our vocabulary).
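
To see how the tokenizer segments text, it can be loaded on its own; this small example (not part of the original card) uses the same repository as the Usage section:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-172b-alpha2-instruct")

# Japanese "natural language processing" and an English phrase.
for text in ["自然言語処理", "Natural language processing"]:
    print(text, "->", tokenizer.tokenize(text))
```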

## Datasets

### Pre-training

The models have been pre-trained using a blend of the following datasets.

| Language | Dataset | Tokens |
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B|
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B|
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|282.1B|
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B|
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B|
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B|
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B|
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B|
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B|
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B|
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B|
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B|
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B|
|Chinese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.8B|
|Korean|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.3B|

### Instruction tuning

The models have been fine-tuned on the following datasets.

| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset |
| |[answer-carefully-001](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed Japanese instruction dataset focusing on LLM safety |
| |[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL |
| |[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL |
| |[oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL |
| |aya-dataset-ja| A Japanese subset of [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) |
| |ichikara-instruction-format| A small instruction dataset derived from ichikara-instruction, with constraints on the output format |
|English |[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | - |
| |[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| |[oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) |
| |[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/Open-Orca/FLAN) | We used a sampled subset |

## Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Send Questions to

llm-jp(at)nii.ac.jp

## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Model Card Authors

*The names are listed in alphabetical order.*

Hirokazu Kiyomaru and Takashi Kodama.