michaelfeil committed • Commit b0a9a41 • Parent(s): 0d48dd0
Upload microsoft/phi-1 ctranslate2 weights
Browse files
- README.md +169 -0
- Research License.docx +0 -0
- added_tokens.json +40 -0
- config.json +34 -0
- generation_config.json +4 -0
- merges.txt +0 -0
- model.bin +3 -0
- special_tokens_map.json +5 -0
- tokenizer.json +0 -0
- tokenizer_config.json +9 -0
- vocab.json +0 -0
- vocabulary.json +0 -0
- vocabulary.txt +0 -0
README.md
ADDED
@@ -0,0 +1,169 @@
---
license: other
language:
- en
pipeline_tag: text-generation
tags:
- ctranslate2
- int8
- float16
- code
---
# Fast Inference with CTranslate2

Speed up inference and reduce memory use by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [microsoft/phi-1](https://huggingface.co/microsoft/phi-1).
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```

```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-phi-1"

from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

model = GeneratorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    # tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
    text=["def fibonacci(", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)
```

Checkpoint compatible with [ctranslate2>=3.22.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"` (see the CPU sketch below)
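
For CPU-only inference, a minimal sketch using the same `GeneratorCT2fromHfHub` API as above, with the documented `int8` CPU compute type (the prompt text is illustrative):

```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# Load the quantized checkpoint with int8 weights on CPU.
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-phi-1",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(
    text=["def print_prime(n):"],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)
```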

Converted on 2023-11-30 using:
```python
TransformersConverter(
    "microsoft/phi-1",
    activation_scales=None,
    copy_files=['vocab.json', 'tokenizer.json', 'generation_config.json', 'README.md', 'special_tokens_map.json', 'merges.txt', 'Research License.docx', 'tokenizer_config.json', 'added_tokens.json', '.gitattributes'],
    load_as_float16=True,
    revision=None,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).convert(
    output_dir=str(tmp_dir),
    vmap=None,
    quantization="int8_float16",
    force=True,
)
```
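
The converted checkpoint can also be loaded with plain CTranslate2, without the `hf-hub-ctranslate2` wrapper. A minimal sketch; the local snapshot download and prompt are illustrative:

```python
import ctranslate2
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

# Download the repository locally, then open it as a CTranslate2 generator.
model_dir = snapshot_download("michaelfeil/ct2fast-phi-1")
generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="int8_float16")
tokenizer = AutoTokenizer.from_pretrained(model_dir)

prompt = "def print_prime(n):"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=64, include_prompt_in_result=False)
print(tokenizer.decode(results[0].sequences_ids[0]))
```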

# License and other remarks:
This is just a quantized version. License conditions are intended to be identical to those of the original Hugging Face repo.

# Original description

## Model Summary

The language model phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python code from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.

## Intended Uses
Given the nature of the training data, phi-1 is best suited for prompts using the code format:

#### code format:
```python
def print_prime(n):
    """
    Print all primes between 1 and n
    """
    for num in range(2, n+1):
        for i in range(2, num):
            if num % i == 0:
                break
        else:
            print(num)
```
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the `else` statement in Python loops.)

**Notes**
* phi-1 is intended for research purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.

## Limitations of phi-1

* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension: As a coding bot, phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.

## Warning about Security Risks
When leveraging phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:

* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS command, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.

Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.

## Training
### Model
* Architecture: a Transformer-based model with a next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days

### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [flash-attention](https://github.com/HazyResearch/flash-attention)

### License
The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1/resolve/main/Research%20License.docx).

### Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

If you need to use the model in a lower precision (e.g., FP16), please wrap the model's forward pass with `torch.autocast()`, as follows:
```python
with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
    outputs = model.generate(**inputs, max_length=200)
```

**Remark.** In the generation function, our model currently does not support beam search (`num_beams` > 1).
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
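
Since only beam search is excluded, greedy decoding (shown above) should work; sampling-style decoding via the standard `generate` kwargs is a reasonable assumption but not confirmed by this card. A sketch under that assumption, reusing `model`, `inputs`, and `tokenizer` from the sample code (sampling parameters are illustrative):

```python
# Sampling-based generation; beam search (num_beams > 1) is not supported.
outputs = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
print(tokenizer.batch_decode(outputs)[0])
```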

### Citation
```bib
@article{gunasekar2023textbooks,
  title={Textbooks Are All You Need},
  author={Gunasekar, Suriya and Zhang, Yi and Aneja, Jyoti and Mendes, Caio C{\'e}sar Teodoro and Del Giorno, Allie and Gopi, Sivakanth and Javaheripi, Mojan and Kauffmann, Piero and de Rosa, Gustavo and Saarikivi, Olli and others},
  journal={arXiv preprint arXiv:2306.11644},
  year={2023}
}
```
Research License.docx
ADDED
Binary file (38.9 kB)
added_tokens.json
ADDED
@@ -0,0 +1,40 @@
{
  "\t\t": 50294,
  "\t\t\t": 50293,
  "\t\t\t\t": 50292,
  "\t\t\t\t\t": 50291,
  "\t\t\t\t\t\t": 50290,
  "\t\t\t\t\t\t\t": 50289,
  "\t\t\t\t\t\t\t\t": 50288,
  "\t\t\t\t\t\t\t\t\t": 50287,
  " ": 50286,
  " ": 50285,
  " ": 50284,
  " ": 50283,
  " ": 50282,
  " ": 50281,
  " ": 50280,
  " ": 50279,
  " ": 50278,
  " ": 50277,
  " ": 50276,
  " ": 50275,
  " ": 50274,
  " ": 50273,
  " ": 50272,
  " ": 50271,
  " ": 50270,
  " ": 50269,
  " ": 50268,
  " ": 50267,
  " ": 50266,
  " ": 50265,
  " ": 50264,
  " ": 50263,
  " ": 50262,
  " ": 50261,
  " ": 50260,
  " ": 50259,
  " ": 50258,
  " ": 50257
}
config.json
ADDED
@@ -0,0 +1,34 @@
{
  "_name_or_path": "microsoft/phi-1",
  "activation_function": "gelu_new",
  "architectures": [
    "PhiForCausalLM"
  ],
  "attn_pdrop": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_phi.PhiConfig",
    "AutoModelForCausalLM": "modeling_phi.PhiForCausalLM"
  },
  "embd_pdrop": 0.0,
  "flash_attn": false,
  "flash_rotary": false,
  "fused_dense": false,
  "initializer_range": 0.02,
  "layer_norm_epsilon": null,
  "model_type": "phi",
  "n_embd": 2048,
  "n_head": 32,
  "n_head_kv": null,
  "n_inner": null,
  "n_layer": 24,
  "n_positions": 2048,
  "resid_pdrop": 0.0,
  "rotary_dim": 32,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.34.1",
  "vocab_size": 51200,
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
generation_config.json
ADDED
@@ -0,0 +1,4 @@
{
  "_from_model_config": true,
  "transformers_version": "4.32.1"
}
merges.txt
ADDED
The diff for this file is too large to render.
model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7842dd284f566043909e2c63f1fca4cff7e18f8dcd05cc3f67fc57031eac19a
size 1421069301
special_tokens_map.json
ADDED
@@ -0,0 +1,5 @@
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,9 @@
{
  "add_prefix_space": false,
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 2048,
  "tokenizer_class": "CodeGenTokenizer",
  "unk_token": "<|endoftext|>"
}
vocab.json
ADDED
The diff for this file is too large to render.
vocabulary.json
ADDED
The diff for this file is too large to render.
vocabulary.txt
ADDED
The diff for this file is too large to render.