dad1909 committed
Commit 3269212 · verified · 1 Parent(s): ea6ac9a

Update README.md

Files changed (1):
  1. README.md +164 -6

README.md CHANGED
@@ -1,6 +1,7 @@
  ---
  language:
  - en
  license: apache-2.0
  tags:
  - text-generation-inference
@@ -8,15 +9,172 @@ tags:
  - unsloth
  - llama
  - trl
- base_model: dad1909/CyberSentinel-27
  ---

  # Uploaded model

- - **Developed by:** dad1909
- - **License:** apache-2.0
- - **Finetuned from model :** dad1909/CyberSentinel-27

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

---
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- unsloth
- llama
- trl
- sft
base_model: meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
---

# Uploaded model

- **Developed by:** dad1909 (Huynh Dac Tan Dat)
- **License:** RMIT

# Model Card for dad1909/CyberSentinel

This repo contains a 4-bit quantized (via bitsandbytes) version of Meta's Meta-Llama-3-8B-Instruct.
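For reference, a bitsandbytes 4-bit quantization like this one is typically produced at load time from the original weights. The exact settings used for this repo are not documented in the card, so the configuration below is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed quantization settings; the card does not pin the exact values
# used to produce dad1909/CyberSentinel.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # original model per this card
    quantization_config=bnb_config,
    device_map="auto",
)
```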

# Model Details

- **Model creator:** Meta
- **Original model:** Meta-Llama-3-8B-Instruct

# Code for running in Google Colab with a TextStreamer (recommended):

```
%%capture
# Install Unsloth, xformers (Flash Attention), and all other required packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
```

```
# Uninstall and reinstall xformers with CUDA support
!pip uninstall -y xformers
!pip install xformers[cuda]
```

```python
from unsloth import FastLanguageModel
import torch
from transformers import TextStreamer

max_seq_length = 1028  # Choose any! RoPE scaling is supported internally.
dtype = torch.float16  # None for auto detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = True    # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dad1909/CyberSentinel",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

alpaca_prompt = """Below is a code snippet. Identify the line of code that is vulnerable and describe the type of software vulnerability.

### Code Snippet:
{}

### Vulnerability Description:
{}"""

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "import sqlite3\n\ndef create_table():\n    conn = sqlite3.connect(':memory:')\n    c = conn.cursor()\n    c.execute('''CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)''')\n    c.execute(\"INSERT INTO users (username, password) VALUES ('user1', 'pass1')\")\n    c.execute(\"INSERT INTO users (username, password) VALUES ('user2', 'pass2')\")\n    conn.commit()\n    return conn\n\ndef vulnerable_query(conn, username):\n    c = conn.cursor()\n    query = f\"SELECT * FROM users WHERE username = '{username}'\"\n    print(f\"Executing query: {query}\")\n    c.execute(query)\n    return c.fetchall()\n\n# Create a database and a table\nconn = create_table()\n\n# Simulate a user input with SQL injection\nuser_input = \"' OR '1'='1\"\nresults = vulnerable_query(conn, user_input)\n\n# Print the results\nprint(\"Results of the query:\")\nfor row in results:\n    print(row)\n\n# Close the connection\nconn.close()\n",  # instruction: the code snippet to analyze
            "",  # output left blank for the model to generate
        )
    ],
    return_tensors="pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1028)
```
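The `TextStreamer` prints tokens to stdout as they are generated. To capture the completion as a single string instead, a minimal variant reusing `model`, `tokenizer`, and `inputs` from the block above:

```python
# Generate without a streamer, then decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=1028)
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```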

#### Install dependencies for the Transformers pipeline and AutoModelForCausalLM examples

```python
!pip install transformers
!pip install torch
!pip install accelerate
```

#### Transformers pipeline

```python
import transformers
import torch

model_id = "dad1909/CyberSentinel"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a chatbot that detects vulnerable software code!"},
    {"role": "user", "content": "What is a buffer overflow?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on either the model's EOS token or Llama 3's end-of-turn token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
)
print(outputs[0]["generated_text"][len(prompt):])
```
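Decoding parameters can also be set explicitly on the pipeline call rather than relying on the checkpoint's generation defaults; the values below are illustrative, not tuned for this model:

```python
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,   # enable sampling instead of greedy decoding
    temperature=0.6,  # illustrative value
    top_p=0.9,        # illustrative value
)
print(outputs[0]["generated_text"][len(prompt):])
```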

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "dad1909/CyberSentinel"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a chatbot that detects vulnerable software code!"},
    {"role": "user", "content": "What is a buffer overflow?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop on either the model's EOS token or Llama 3's end-of-turn token
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
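The same chat interface can be pointed at the model's intended task of spotting vulnerable code. A sketch reusing `tokenizer`, `model`, and `terminators` from the block above; the prompt wording here is illustrative, not a format this repo prescribes:

```python
code_snippet = """def vulnerable_query(conn, username):
    c = conn.cursor()
    query = f"SELECT * FROM users WHERE username = '{username}'"
    c.execute(query)
    return c.fetchall()
"""

messages = [
    {"role": "system", "content": "You identify the vulnerable line in a code snippet and name the vulnerability type."},
    {"role": "user", "content": "Analyze this code:\n\n" + code_snippet},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```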

## How to use

This repository contains two versions of Meta-Llama-3-8B-Instruct: one for use with `transformers` and one for use with the original `llama3` codebase.

### Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. The sections above show examples of both.
175
+
176
+ ## Training Data
177
+
178
+ **Overview** cyberAI is pretrained from dad1909/DSV that data related to software vulnerability codes. The fine-tuning data includes publicly available instruction and output datasets.
179
+
180
+ **Data Freshness** The pretraining data is continuously updated with new vulnerability codes.