# Llama_TOS: Fine-tuned Llama 3.2 1B for Privacy Policy and Terms of Service Analysis
## Model Description
This model is a fine-tuned version of Llama 3.2 1B, trained specifically to analyze privacy policies and terms of service. It classifies clauses as fair or unfair and identifies specific privacy practices mentioned in the text.
## Intended Use
This model is designed for:
- Analyzing privacy policy clauses
- Classifying terms-of-service clauses as fair or unfair
- Recognizing specific privacy practices in legal documents
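The exact prompt template used during fine-tuning is not documented here. As an illustration of how a clause might be framed for the model, a minimal helper could look like this (the instruction wording below is an assumption, not the training template):

```python
def build_analysis_prompt(clause: str) -> str:
    """Frame a policy clause as an analysis request.

    The instruction wording is illustrative only; it is not the
    exact template the model was fine-tuned with.
    """
    return (
        "Analyze this privacy policy clause and state whether it is "
        f"fair or unfair: '{clause}'"
    )

prompt = build_analysis_prompt("We collect your email address for marketing purposes.")
print(prompt)
```

A consistent wrapper like this keeps prompts uniform across clauses, which matters for a small 1B model that is sensitive to phrasing.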
## Training Procedure
The model was fine-tuned on the CodeHima/app350_llama_format dataset, which contains annotated conversations about privacy policy clauses. The fine-tuning process used the following parameters:
- Base model: unsloth/Llama-3.2-1B-Instruct
- Training steps: 100
- Learning rate: 2e-4
- Batch size: 2
- Gradient accumulation steps: 4
- Max sequence length: 2048
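Together, these settings give an effective batch size of 2 × 4 = 8 sequences per optimizer step. A minimal sketch collecting the reported hyperparameters in one place (key names mirror `transformers.TrainingArguments` fields; the actual fine-tuning script is not published, so this is illustrative):

```python
# Hyperparameters reported above, gathered into a single config dict.
# Key names follow transformers.TrainingArguments conventions; the
# real training script is not part of this model card.
hparams = {
    "model_name": "unsloth/Llama-3.2-1B-Instruct",
    "max_steps": 100,
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "max_seq_length": 2048,
}

# Effective batch size seen by each optimizer step.
effective_batch = (
    hparams["per_device_train_batch_size"]
    * hparams["gradient_accumulation_steps"]
)
print(effective_batch)  # 8
```

Gradient accumulation lets a memory-constrained GPU simulate a larger batch: gradients from 4 micro-batches of 2 are summed before each weight update.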
## Limitations
- The model's performance is limited by the size and quality of the fine-tuning dataset.
- It may not generalize well to privacy policies or terms of service that significantly differ from those in the training data.
- The model should not be considered a replacement for legal advice or professional analysis.
## Ethical Considerations
- This model should be used as a tool to assist in understanding privacy policies and terms of service, not as a definitive legal interpreter.
- Users should be aware of potential biases in the model's responses and always verify important information.
## How to Use
You can use this model to analyze privacy policy clauses or terms of service. Here's an example of how to use it:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CodeHima/Llama_TOS")
model = AutoModelForCausalLM.from_pretrained("CodeHima/Llama_TOS")

# Llama 3.2 is instruction-tuned, so wrap the request in its chat template.
messages = [
    {"role": "user", "content": "Analyze this privacy policy clause: 'We collect your email address for marketing purposes.'"}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=100)
# Decode only the newly generated tokens, skipping special tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Model Tree
- Base model: meta-llama/Llama-3.2-1B-Instruct
- Fine-tuned from: unsloth/Llama-3.2-1B-Instruct