---
datasets:
- Vanessasml/cybersecurity_32k_instruction_input_output
pipeline_tag: text-generation
tags:
- finance
- supervision
- cyber risk
- cybersecurity
- cyber threats
- SFT
- LoRA
- A100GPU
---
# Model Card for Cyber-risk-llama-3-8b-instruct-sft

## Model Description
This model is a fine-tuned version of `meta-llama/Meta-Llama-3-8B-Instruct` on the `vanessasml/cybersecurity_32k_instruction_input_output` dataset. 

It is specifically designed to improve performance in generating and understanding cybersecurity content, identifying cyber threats, and classifying data under the NIST taxonomy and IT risks based on the EBA ICT guidelines.

## Intended Use
- **Intended users**: Data scientists and developers working on cybersecurity applications.
- **Out-of-scope use cases**: This model should not be used for medical advice, legal decisions, or any life-critical systems.

## Training Data
The model was fine-tuned on `vanessasml/cybersecurity_32k_instruction_input_output`, a dataset focused on cybersecurity news analysis. 
No special data format was applied, as [recommended](https://huggingface.co/blog/llama3#fine-tuning-with-%F0%9F%A4%97-trl), but the following steps are needed to adjust the input:
```python
# During training: set up the chat format (adds special tokens and resizes the embedding layer)
from trl import setup_chat_format

model, tokenizer = setup_chat_format(model, tokenizer)

# During inference: format the conversation with the chat template
# (`pipeline` is a transformers text-generation pipeline; see "How to Use" below)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
```

## Training Procedure
- **Preprocessing**: Text data were tokenized using the tokenizer corresponding to the base model `meta-llama/Meta-Llama-3-8B-Instruct`.
- **Hardware**: The training was performed on GPUs with mixed precision (FP16/BF16) enabled.
- **Optimizer**: Paged AdamW with a cosine learning rate schedule.
- **Epochs**: The model was trained for 1 epoch.
- **Batch size**: 4 per device, with gradient accumulation where required.
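
The exact training configuration was not published with this card; below is a minimal `transformers.TrainingArguments` sketch consistent with the settings above (the output path, learning rate, and accumulation steps are assumptions, not recorded values):

```python
import torch
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",                   # assumed; not stated in this card
    num_train_epochs=1,                       # 1 epoch
    per_device_train_batch_size=4,            # batch size 4 per device
    gradient_accumulation_steps=2,            # "where required"; exact value assumed
    optim="paged_adamw_32bit",                # paged AdamW
    lr_scheduler_type="cosine",               # cosine learning rate schedule
    learning_rate=2e-4,                       # assumed
    fp16=not torch.cuda.is_bf16_supported(),  # mixed precision: FP16 or BF16,
    bf16=torch.cuda.is_bf16_supported(),      # depending on hardware support
    gradient_checkpointing=True,              # see "Environmental Impact" below
    group_by_length=True,                     # group-wise data processing
)
```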

## Evaluation Results
Model evaluation was based on qualitative assessment of generated text relevance and coherence in the context of cybersecurity. 

## Quantization and Optimization
- **Quantization**: 4-bit precision with type `nf4`. Nested quantization is disabled.
- **Compute dtype**: `float16` to ensure efficient computation.
- **LoRA Settings**:
  - LoRA attention dimension: `64`
  - Alpha parameter for LoRA scaling: `16`
  - Dropout in LoRA layers: `0.1`
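
These values map directly onto the standard `bitsandbytes` and `peft` configuration objects used for QLoRA-style fine-tuning; a minimal sketch (`target_modules` is left to `peft`'s defaults for Llama-style models):

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization, nested (double) quantization disabled, float16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA attention dimension (rank) 64, scaling alpha 16, dropout 0.1
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```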

## Environmental Impact
- **Compute Resources**: Training leveraged energy-efficient hardware and practices to minimize carbon footprint.
- **Strategies**: Gradient checkpointing and group-wise data processing were used to optimize memory and power usage.

## How to Use
Here is how to load and use the model using transformers:

```python
import torch
import transformers

model_name = "vanessasml/cyber-risk-llama-3-8b-instruct-sft"

# Example of how to use the model:
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# Example system prompt; replace with one suited to your use case
SYSTEM_PROMPT = "You are a cybersecurity expert assistant."

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What are the main 5 cyber classes from the NIST cyber framework?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on either the model's EOS token or Llama 3's end-of-turn token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## Limitations and Bias
The model, while robust in cybersecurity contexts, may not generalize well to unrelated domains. Users should be aware of biases inherent in the training data, which may manifest in the model's predictions.


## Citation
If you use this model, please cite it as follows:

```bibtex
@misc{cyber-risk-llama-3-8b-instruct-sft,
  author = {Vanessa Lopes},
  title = {Cyber-risk-llama-3-8B-Instruct-sft Model},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {Hugging Face Model Hub}
}
```