DreamGallery committed (verified)
Commit f0772f1 · Parent(s): a68ecdc

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,202 @@
---
base_model: microsoft/Phi-3-small-8k-instruct
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
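Pending the card being filled in, the following is a minimal sketch of loading this adapter on top of the base model with `transformers` and `peft`. The adapter path, prompt, and generation settings are placeholders, and loading Phi-3-small with `trust_remote_code=True` may additionally require `tiktoken` (and, depending on the attention implementation, flash-attn).

```python
# Minimal sketch (not from the original card): load the base model, attach this
# LoRA adapter, and generate. "path/to/this/adapter" is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-small-8k-instruct"
adapter_path = "path/to/this/adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # requires accelerate
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, adapter_path)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```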
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.13.2
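The adapter was saved with PEFT 0.13.2. If `peft` is not wanted at inference time, the LoRA weights can also be merged into the base model and saved as a standalone checkpoint; a brief sketch (the adapter path and output directory are placeholders):

```python
# Sketch: merge the LoRA adapter into the base weights and save a standalone model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-small-8k-instruct", trust_remote_code=True
)
merged = PeftModel.from_pretrained(base, "path/to/this/adapter").merge_and_unload()
merged.save_pretrained("phi3-small-merged")  # placeholder output directory
```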
adapter_config.json ADDED
@@ -0,0 +1,31 @@
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "microsoft/Phi-3-small-8k-instruct",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 256,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 128,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "down_proj",
    "up_proj",
    "dense",
    "query_key_value"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
}
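The configuration above describes a LoRA adapter with rank `r = 128`, `lora_alpha = 256` (an effective scaling of `lora_alpha / r = 2`, since rsLoRA is disabled), dropout 0.05, and no trained biases, applied to the attention projections (`query_key_value`, `dense`) and the MLP projections (`up_proj`, `down_proj`). As a sketch, the same configuration could be rebuilt in code with `peft` (the base model variable is a placeholder):

```python
# Sketch: a LoraConfig mirroring adapter_config.json above.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["down_proj", "up_proj", "dense", "query_key_value"],
)
# base_model would be microsoft/Phi-3-small-8k-instruct loaded with transformers:
# peft_model = get_peft_model(base_model, lora_config)
# peft_model.print_trainable_parameters()
```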
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59d8ca3d53f8d19df89972f5606ad518e14d615df85c789a6f5669a5e7938da4
size 1140886056
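The three lines above are a Git LFS pointer rather than the adapter weights themselves: the spec version, the SHA-256 of the real `adapter_model.safetensors`, and its size in bytes (~1.14 GB). A small sketch for verifying a downloaded copy against the pointer:

```python
# Sketch: verify a downloaded adapter_model.safetensors against the LFS pointer.
import hashlib

EXPECTED_SHA256 = "59d8ca3d53f8d19df89972f5606ad518e14d615df85c789a6f5669a5e7938da4"

h = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED_SHA256, "checksum mismatch"
```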
cl100k_base.tiktoken ADDED
The diff for this file is too large to render. See raw diff
 
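`cl100k_base.tiktoken` is the BPE vocabulary in tiktoken's plain-text format: one base64-encoded token and its integer rank per line. This is the format parsed by `_load_tiktoken_bpe` in `tokenization_phi3_small.py` below; a standalone sketch of the same parsing:

```python
# Sketch: parse a .tiktoken vocabulary file (base64 token + integer rank per line).
import base64
from typing import Dict

def load_tiktoken_bpe(path: str) -> Dict[bytes, int]:
    ranks: Dict[bytes, int] = {}
    with open(path, "rb") as f:
        for line in f:
            if not line.strip():
                continue
            token_b64, rank = line.split()
            ranks[base64.b64decode(token_b64)] = int(rank)
    return ranks
```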
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "pad_token": "<|endoftext|>"
}
tokenization_phi3_small.py ADDED
@@ -0,0 +1,338 @@
# Adapted from https://huggingface.co/Qwen/Qwen-7B-Chat/blob/main/tokenization_qwen.py
import os
from typing import Collection, List, Optional, Dict, Set, Tuple, Union

from functools import cached_property

import base64
import requests

from transformers import PreTrainedTokenizer, AddedToken, AutoConfig
from transformers.models.auto.tokenization_auto import get_tokenizer_config
import tiktoken


"""
This tokenizer is almost identical to tiktoken.get_encoding("cl100k_base")
with a few additional special tokens to support the ChatML format.

TODO(bapatra): Right now, I do not save the special tokens to the vocab file.
Maybe in the future, that would be useful? Can add that support later.

"""

def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
    with open(tiktoken_bpe_file, "rb") as f:
        contents = f.read()
    return {
        base64.b64decode(token): int(rank)
        for token, rank in (line.split() for line in contents.splitlines() if line)
    }

# On the megatron codebase, we pad vocabularies to ensure matrix multiplication is fast.
# this in turn causes some indices to be empty. We account for these empty indices by adding
# dummy tokens to the tokenizer.

EFFECTIVE_PADDED_VOCAB_SIZE = 100352
ACTUAL_VOCAB_SIZE = 100276


DUMMY_TOKENS = {
    f"<|dummy_id_{11 + offset}|>": 100276 + offset
    for offset in range(1, EFFECTIVE_PADDED_VOCAB_SIZE - ACTUAL_VOCAB_SIZE)
}

SPECIAL_TOKENS = {
    # tiktoken.get_encoding("cl100k_base")._special_tokens
    '<|endoftext|>': 100257,
    '<|fim_prefix|>': 100258,
    '<|fim_middle|>': 100259,
    '<|fim_suffix|>': 100260,
    # Special tokens for post-training
    "<|system|>": 100261,
    "<|user|>": 100262,
    "<|assistant|>": 100263,
    # Dummy unused tokens
    "<|dummy_id_0|>": 100264,
    "<|dummy_id_1|>": 100265,
    # Special tokens for post-training continued
    "<|end|>": 100266,
    # Some dummy tokens, so that tokenization is contiguous and does not cause issues
    # Note that the 100256th token of tiktoken.get_encoding("cl100k_base") does not
    # actually map to anything. So we use a dummy token here.
    "<|dummy_id_2|>": 100256,
    # Likewise, tokens from 100267 to 100275 are also unused
    "<|dummy_id_3|>": 100267,
    "<|dummy_id_4|>": 100268,
    "<|dummy_id_5|>": 100269,
    "<|dummy_id_6|>": 100270,
    "<|dummy_id_7|>": 100271,
    "<|dummy_id_8|>": 100272,
    "<|dummy_id_9|>": 100273,
    "<|dummy_id_10|>": 100274,
    "<|dummy_id_11|>": 100275,
    # The final end of prompt token
    # (unused, but present as a part of tiktoken.get_encoding("cl100k_base")._special_tokens)
    '<|endofprompt|>': 100276,
    # Dummy tokens to account for padding of the tokenizer
    # We pad to ensure tensor cores are used for vocab multiplication
    **DUMMY_TOKENS
}
class Phi3SmallTokenizer(PreTrainedTokenizer):
    vocab_files_names = {
        "vocab_file": "cl100k_base.tiktoken"
    }

    model_input_names: List[str] = ["input_ids", "attention_mask"]
    padding_side = "left"

    def __init__(
        self,
        vocab_file: Optional[str] = None,
        errors: str = "replace",
        **kwargs
    ) -> None:
        # PreTrainedTokenizer's init calls _add_tokens, which in turn checks
        # if the token is present in `self.special_tokens`. Hence instantiating it here.
        # The way Qwen gets around this is by checking against SPECIAL_TOKENS,
        # but I think it's better to check against the object's own `special_tokens`
        # in case we eventually want to allow the tokenizer to have special tokens.
        self.special_tokens = SPECIAL_TOKENS

        super().__init__(**kwargs)
        self.errors = errors

        try:
            base = tiktoken.get_encoding("cl100k_base")
            # This deals with the scenario where the user has restricted internet access
            # and thus fails to download the tokenizer file from https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken
            # It is assumed that the user should be able to access files on the Hugging Face Hub.
        except requests.RequestException:
            import hashlib
            from transformers.utils import cached_file
            cached_tokenizer_path = cached_file(
                "microsoft/Phi-3-small-8k-instruct",
                "cl100k_base.tiktoken",
                _raise_exceptions_for_gated_repo=False,
                _raise_exceptions_for_missing_entries=False,
                _raise_exceptions_for_connection_errors=False
            )
            tiktoken_cache_dir = os.path.dirname(cached_tokenizer_path)
            tiktoken_cache_path = os.path.join(
                tiktoken_cache_dir,
                hashlib.sha1("https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken".encode()).hexdigest()
            )
            if not os.path.exists(tiktoken_cache_path):
                os.rename(cached_tokenizer_path, tiktoken_cache_path)
            os.environ["TIKTOKEN_CACHE_DIR"] = tiktoken_cache_dir
            base = tiktoken.get_encoding("cl100k_base")

        if vocab_file is None:
            self.mergeable_ranks: Dict[bytes, int] = base._mergeable_ranks
        else:
            self.mergeable_ranks = _load_tiktoken_bpe(vocab_file)

        self.pat_str = base._pat_str

        enc = tiktoken.Encoding(
            name="phi3small",
            pat_str=self.pat_str,
            mergeable_ranks=self.mergeable_ranks,
            special_tokens=self.special_tokens,
        )
        self.tokenizer = enc

        self.decoder: Dict[int, bytes] = {
            v: k for k, v in self.mergeable_ranks.items()
        }
        self.decoder.update({v: k for k, v in self.special_tokens.items()})

        self.eod_id = self.tokenizer.eot_token
        self._eos_token = self._convert_id_to_token(self.eod_id)

        # Setting the bos_token to be the same as the eos_token.
        # Note that this is **not** the correct thing to do, and is done
        # just so that some of the downstream libraries do not break.
        self._bos_token = self._eos_token

        # Assign the special tokens to class variables
        self.system_id = self.special_tokens["<|system|>"]
        self.user_id = self.special_tokens["<|user|>"]
        self.assistant_id = self.special_tokens["<|assistant|>"]
        self.end_id = self.special_tokens["<|end|>"]

    @cached_property
    def dummy_token_indices(self) -> List[int]:
        # There are some additional special tokens in the cl100k_base tokenizer
        # that we do not use. Hence, we also consider them to be dummy tokens.
        additional_tokens = [
            "<|fim_prefix|>",
            "<|fim_middle|>",
            "<|fim_suffix|>",
            "<|endofprompt|>"
        ]
        dummy_token_indices = [index for token, index in self.special_tokens.items() if "dummy_id" in token]
        dummy_token_indices.extend([self.special_tokens[token] for token in additional_tokens])
        return sorted(dummy_token_indices)

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["tokenizer"]
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        enc = tiktoken.Encoding(
            name="cl100k_im",
            pat_str=self.pat_str,
            mergeable_ranks=self.mergeable_ranks,
            special_tokens=self.special_tokens,
        )
        self.tokenizer = enc

    def __len__(self):
        return self.tokenizer.n_vocab

    @classmethod
    def from_pretrained(
        cls,
        pretrained_model_name_or_path: Union[str, os.PathLike],
        *init_inputs,
        **kwargs,
    ):
        cls_kwargs = kwargs
        # First try to load from the tokenization config if it exists
        tokenization_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
        if tokenization_config:
            cls_kwargs = {
                **tokenization_config,
                **cls_kwargs
            }
        else:
            config = AutoConfig.from_pretrained(pretrained_model_name_or_path, trust_remote_code=True)
            cls_kwargs["model_max_length"] = config.max_position_embeddings
        return cls(**cls_kwargs)

    def get_vocab(self) -> Dict[Union[str, bytes], int]:
        return {**self.mergeable_ranks, **self.special_tokens}

    def convert_tokens_to_ids(
        self,
        tokens: Union[bytes, str, List[Union[bytes, str]]]
    ) -> Union[int, List[int]]:
        if isinstance(tokens, (str, bytes)):
            if tokens in self.special_tokens:
                return self.special_tokens[tokens]
            else:
                return self.mergeable_ranks.get(tokens)
        ids: List[int] = []
        for token in tokens:
            ids.append(self.convert_tokens_to_ids(token))
        return ids

    def _add_tokens(
        self,
        new_tokens: Union[List[str], List[AddedToken]],
        special_tokens: bool = False,
    ) -> int:
        if not special_tokens and new_tokens:
            raise ValueError("Only special tokens can be added to this tokenizer")
        for token in new_tokens:
            surface_form = token.content if isinstance(token, AddedToken) else token
            if surface_form not in self.special_tokens:
                raise ValueError(
                    "For now, we do not support unknown special tokens\n"
                    "In the future, if there is a need for this, we can add special tokens to the tokenizer\n"
                    "starting from rank 100261 - 100263 and then 100266 - 100275.\n"
                    "And finally, we can re-construct the enc object back\n"
                )
        return 0

    def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
        file_path = os.path.join(save_directory, "cl100k_base.tiktoken")
        with open(file_path, "w") as f:
            for token, rank in self.mergeable_ranks.items():
                line = base64.b64encode(token).decode("utf-8") + " " + str(rank) + "\n"
                f.write(line)
        return (file_path,)

    def tokenize(
        self,
        text: str,
        allowed_special: Union[Set, str] = "all",
        disallowed_special: Union[Collection, str] = (),
        **kwargs
    ) -> List[Union[bytes, str]]:
        tokens: List[Union[bytes, str]] = []
        for token_id in self.tokenizer.encode(
            text, allowed_special=allowed_special, disallowed_special=disallowed_special
        ):
            tokens.append(self.decoder[token_id])
        return tokens

    def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
        """
        Converts a sequence of tokens into a single string.
        """
        text = ""
        temp = b""
        for t in tokens:
            if isinstance(t, str):
                if temp:
                    text += temp.decode("utf-8", errors=self.errors)
                    temp = b""
                text += t
            elif isinstance(t, bytes):
                temp += t
            else:
                raise TypeError("token should only be of type bytes or str")
        if temp:
            text += temp.decode("utf-8", errors=self.errors)
        return text

    @property
    def vocab_size(self):
        return self.tokenizer.n_vocab

    @property
    def eos_token_id(self) -> int:
        return self.eod_id

    def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
        """Converts an id to a token, special tokens included"""
        if index in self.decoder:
            return self.decoder[index]
        raise ValueError("unknown ids")

    def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
        """Converts a token to an id using the vocab, special tokens included"""
        if token in self.special_tokens:
            return self.special_tokens[token]
        if token in self.mergeable_ranks:
            return self.mergeable_ranks[token]
        raise ValueError("unknown token")

    def _tokenize(self, text: str, **kwargs):
        """
        Converts a string into a sequence of tokens (string), using the tokenizer. Split in words for word-based
        vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces).
        Does NOT take care of added tokens.
        """
        raise NotImplementedError

    def _decode(
        self,
        token_ids: Union[int, List[int]],
        skip_special_tokens: bool = False,
        errors: str = None,
        **kwargs,
    ) -> str:
        if isinstance(token_ids, int):
            token_ids = [token_ids]
        if skip_special_tokens:
            token_ids = [i for i in token_ids if i < self.eod_id]
        return self.tokenizer.decode(token_ids, errors=errors or self.errors)
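Because the tokenizer ships as custom code, it is loaded through `AutoTokenizer` with `trust_remote_code=True` (the `auto_map` in `tokenizer_config.json` below points `AutoTokenizer` at `Phi3SmallTokenizer`). A minimal round-trip sketch, assuming `tiktoken` is installed:

```python
# Sketch: load the custom Phi3SmallTokenizer and round-trip a string.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-small-8k-instruct", trust_remote_code=True
)
ids = tok.encode("Hello, Phi-3-small!")  # cl100k_base BPE ids
print(tok.decode(ids))                   # -> "Hello, Phi-3-small!"
print(len(tok))                          # padded vocab size (100352)
```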
tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
{
  "_commit_hash": "1535ae26fb4faada95c6950e8bc6e867cdad6b00",
  "_from_auto": true,
  "added_tokens_decoder": {},
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_phi3_small.Phi3SmallTokenizer",
      "tokenization_phi3_small.Phi3SmallTokenizer"
    ]
  },
  "bos_token": "<|endoftext|>",
  "chat_template": "{{ bos_token }}{% for message in messages %}{{'<|' + message['role'] + '|>' + '\n' + message['content'] + '<|end|>\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 8192,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "Phi3SmallTokenizer",
  "trust_remote_code": true
}
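The `chat_template` above renders each message as `<|role|>\n{content}<|end|>\n`, prefixes the whole prompt with the BOS token, and appends `<|assistant|>\n` when a generation prompt is requested; the role markers match the special tokens defined in `tokenization_phi3_small.py`. A short sketch of how it renders, using the `tok` loaded in the earlier sketch:

```python
# Sketch: render the prompt defined by chat_template (no tokenization).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is LoRA?"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# prompt == "<|endoftext|><|system|>\nYou are a helpful assistant.<|end|>\n"
#           "<|user|>\nWhat is LoRA?<|end|>\n<|assistant|>\n"
```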
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34144030d2bbaae6ac2167f8c344fffe13e645ecb7605ba369ae08bf1c5930c4
size 5368