Llama-3.2-SARA-3b

SARA

This model is a fine-tuned version of unsloth/Llama-3.2-3B-bnb-4bit, developed to act as SARA, the Security Awareness and Resilience Assistant. SARA is a lightweight, offline-friendly AI assistant designed to run on low-spec laptops and provide practical cybersecurity advice in a conversational style.

Model Details

Model Description

This model is fine-tuned for conversational question-answering focused on basic cybersecurity topics. It was trained as part of an ongoing blog series (https://www.eryrilabs.co.uk/post/building-sara-a-lightweight-cybersecurity-assistant-for-everyday-laptops) to deliver short, actionable responses suitable for users who want quick guidance on digital safety without needing advanced technical knowledge.

  • Developed by: EryriLabs
  • Funded by: Personal Project
  • Model type: Fine-tuned conversational LLM for cybersecurity question-answering
  • Language(s) (NLP): English (en)
  • License: Llama 3.2 Community License (llama3.2)
  • Finetuned from model: unsloth/Llama-3.2-3B-bnb-4bit

Model Sources

  • Blog post: https://www.eryrilabs.co.uk/post/building-sara-a-lightweight-cybersecurity-assistant-for-everyday-laptops

Uses

This model is intended for providing cybersecurity information and guidance to general users in an accessible, offline-friendly way.

Direct Use

This model can be used as an offline assistant for basic cybersecurity questions, answering common queries in a conversational format. It is ideal for use cases where an internet connection is not available or where low-spec hardware constraints apply.
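
For fully offline use, the model files can be downloaded once while a connection is available and then loaded from local storage. A minimal sketch, assuming the huggingface_hub and transformers packages and the repository id shown in this card:

from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# One-time download while online; files land in the local Hugging Face cache.
local_dir = snapshot_download("EryriLabs/Llama-3.2-SARA-3b")

# Later, load entirely from disk with no network access required.
model = AutoModelForCausalLM.from_pretrained(local_dir, device_map="auto", local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)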

Out-of-Scope Use

This model should not be used for professional or critical cybersecurity advice, as it is designed for general guidance and may lack the specificity required for advanced technical issues. It is also not suitable for providing nuanced advice in areas outside basic cybersecurity practices.

Bias, Risks, and Limitations

While SARA is optimized for basic cybersecurity education, it has limitations in depth and may lack the ability to answer highly technical questions. Additionally, it may be limited in handling complex, nuanced queries due to its lightweight design and quantized 4-bit structure.

Recommendations

Users should consider SARA as an educational tool rather than a replacement for professional cybersecurity advice. Further fine-tuning could help improve the model's handling of diverse inputs and conversational depth, making it more robust for varied user needs.

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("EryriLabs/Llama-3.2-SARA-3b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("EryriLabs/Llama-3.2-SARA-3b")

# Sample question
input_text = "What makes a strong password?"

# Tokenize on the model's device and generate a response
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
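
Since SARA is a conversational fine-tune, prompting through the tokenizer's chat template, if the tokenizer in this repository ships one (the card does not say), will usually give cleaner answers than a raw text prompt. A hedged sketch reusing the model and tokenizer above; the system prompt is illustrative, not the one used in training:

messages = [
    {"role": "system", "content": "You are SARA, a friendly cybersecurity awareness assistant."},  # illustrative system prompt
    {"role": "user", "content": "What makes a strong password?"},
]

# Format the conversation the way the model expects, then generate.
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

chat_outputs = model.generate(chat_inputs, max_new_tokens=200)
# Strip the prompt tokens and print only the newly generated answer.
print(tokenizer.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))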

Training Details

Training Data

The model was fine-tuned on a custom Q&A-style dataset centered on cybersecurity fundamentals, such as creating strong passwords and using 2-Step Verification.
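
The exact dataset schema is not published. Purely as an illustration, a Q&A dataset of this kind is often stored as simple question/answer records, for example:

# Hypothetical example records -- the real dataset layout may differ.
qa_examples = [
    {
        "question": "What makes a strong password?",
        "answer": "Use at least 12 characters with a mix of letters, numbers, and symbols, and never reuse passwords across sites.",
    },
    {
        "question": "Should I turn on 2-Step Verification?",
        "answer": "Yes. It adds a second check, such as a code sent to your phone, so a stolen password alone is not enough to get into your account.",
    },
]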

Training Procedure

Fine-tuning was conducted on a system with an Intel Core i9-12900K CPU, an NVIDIA GeForce RTX 4090 GPU, and 32 GB of RAM. Unsloth's 4-bit quantization (bnb-4bit) was used to keep the model compact and efficient enough for deployment on low-spec laptops.
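
The full training script is not included in this card. As a rough guide, a typical Unsloth 4-bit fine-tuning setup for this base model looks like the sketch below; the sequence length and LoRA hyperparameters are illustrative assumptions, not SARA's actual values.

from unsloth import FastLanguageModel

# Load the 4-bit (bnb-4bit) base model that SARA was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-bnb-4bit",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,              # illustrative rank
    lora_alpha=16,     # illustrative
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)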

Training Hyperparameters

  • Training regime: Mixed precision with 4-bit quantization (bnb-4bit); see the configuration sketch below
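
Training with Unsloth is typically driven through TRL's SFTTrainer, as noted under Software. The configuration below is a hedged sketch: the dataset variable, output directory, and all hyperparameter values are assumptions, and argument names vary somewhat between TRL versions.

from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                 # model and tokenizer from the Unsloth sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,       # hypothetical: Q&A records rendered to a "text" column
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="sara-finetune",       # illustrative
        per_device_train_batch_size=2,    # illustrative
        gradient_accumulation_steps=4,    # illustrative
        num_train_epochs=1,               # illustrative
        bf16=True,                        # mixed precision; use fp16=True on older GPUs
        logging_steps=10,
    ),
)
trainer.train()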

Speeds, Sizes, Times

Training took approximately 10 minutes, with additional fine-tuning recommended for improved performance, especially for handling varied text inputs and enhancing conversational depth.

Evaluation

Testing Data, Factors & Metrics

Testing Data

Testing was conducted on a dataset of common cybersecurity questions to evaluate the model’s responsiveness and accuracy for general use cases.

Factors

The model was evaluated based on its ability to provide clear, direct answers to basic cybersecurity questions.

Metrics

The main evaluation metric was response accuracy for typical cybersecurity queries.
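
The card does not describe how accuracy was scored. As a purely illustrative example (the questions, keywords, and generate_answer helper below are hypothetical), one lightweight way to spot-check a model like this is a keyword pass/fail over a few known questions:

# Hypothetical spot-check: does the answer mention the expected key points?
test_cases = [
    ("What makes a strong password?", ["long", "unique"]),
    ("What is phishing?", ["email", "link"]),
]

def generate_answer(question: str) -> str:
    # Reuses the model and tokenizer loaded in "How to Get Started".
    inputs = tokenizer(question, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=150)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

passed = sum(
    all(keyword in generate_answer(question).lower() for keyword in keywords)
    for question, keywords in test_cases
)
print(f"Keyword accuracy: {passed}/{len(test_cases)}")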

Results

The model performs adequately for its intended purpose, with room for improvement in response handling and input variability.

Summary

SARA functions well for basic cybersecurity guidance but requires additional fine-tuning to better handle diverse inputs and enhance conversational flow.

Environmental Impact

Carbon emissions for this project can be estimated using the Machine Learning Impact calculator.

  • Hardware Type: Intel Core i9-12900K CPU, NVIDIA GeForce RTX 4090 GPU
  • Hours used: ~0.17 hours (approximately 10 minutes of fine-tuning)
  • Carbon Emitted: ~0.01 kg CO2eq (rough estimate; see the sketch below)
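
The figure above can be sanity-checked with a back-of-the-envelope calculation. The power draw and grid carbon intensity below are assumptions, not measurements:

# Rough estimate with assumed figures, not measured values.
hours = 10 / 60                # ~10 minutes of fine-tuning
power_kw = 0.45                # assumed ~450 W combined CPU + GPU draw
grid_kg_per_kwh = 0.2          # assumed grid carbon intensity, kg CO2eq per kWh

energy_kwh = hours * power_kw                 # ~0.075 kWh
emissions_kg = energy_kwh * grid_kg_per_kwh   # ~0.015 kg CO2eq
print(f"~{emissions_kg:.3f} kg CO2eq")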

Compute Infrastructure

The fine-tuning process was conducted on a high-spec machine, with final deployment optimized for low-spec hardware.

Hardware

Intel Core i9-12900K CPU, NVIDIA GeForce RTX 4090 GPU, 32 GB RAM

Software

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Contact

For questions or issues, please contact EryriLabs.
