Update README.md
README.md
CHANGED
@@ -1,199 +1,182 @@
(Removed: the auto-generated `transformers` model card template, "# Model Card for Model ID", consisting of "[More Information Needed]" placeholders for sections such as Hardware, Software, Citation, Glossary, More Information, Model Card Authors, and Model Card Contact.)

---
library_name: transformers
configs:
- config_name: default
tags:
- not-for-all-audiences
extra_gated_prompt: >-
  By filling out the form below I understand that LlavaGuard is a derivative
  model based on web-scraped images and the SMID dataset, which use individual
  licenses whose respective terms and conditions apply. I understand that all
  content uses are subject to the terms of use. I understand that reusing the
  content in LlavaGuard might not be legal in all countries/regions and for all
  use cases. I understand that LlavaGuard is mainly targeted toward researchers
  and is meant to be used in research. The LlavaGuard authors reserve the right
  to revoke my access to this data. They reserve the right to modify this data
  at any time in accordance with take-down requests.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  I have explicitly checked that downloading LlavaGuard is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the relevant Terms of Use: checkbox
datasets:
- AIML-TUDA/LlavaGuard
pipeline_tag: image-text-to-text
---

WARNING: This repository contains content that might be disturbing! We therefore set the `not-for-all-audiences` tag.

This LlavaGuard model was introduced in [LLAVAGUARD: VLM-based Safeguards for Vision Dataset Curation and Safety Assessment](https://arxiv.org/abs/2406.05113). Please also check out our [website](https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html).

## Overview

Here we provide the transformers-converted weights of LlavaGuard-13b.
If you want to use the weights for fine-tuning or with SGLang, please refer to the [base model](https://huggingface.co/AIML-TUDA/LlavaGuard-13b).

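Access to this repository is gated via the form above. If the checkpoint you load is gated as well, `from_pretrained` may need an authenticated Hugging Face session before it can download the weights. A minimal sketch using `huggingface_hub` (the token value is a placeholder you replace with your own):

```Python
from huggingface_hub import login

# Authenticate once per environment; alternatively, run `huggingface-cli login`
# in a terminal. The token below is a placeholder for your own access token.
login(token="hf_your_token_here")
```
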
#### Usage

For model inference, you can run the code provided below, e.g. by saving it as `my_script.py` and running `python my_script.py`. Note that `policy` is the safety policy prompt defined in the Safety Taxonomy section below.

```Python
from transformers import AutoProcessor, LlavaForConditionalGeneration
from PIL import Image
import requests

# Load the transformers-converted LlavaGuard-13b weights and the matching processor.
model = LlavaForConditionalGeneration.from_pretrained('LukasHug/LlavaGuard-13b-hf')
processor = AutoProcessor.from_pretrained('LukasHug/LlavaGuard-13b-hf')

# One user turn containing the image and the safety policy prompt
# (`policy` is defined in the Safety Taxonomy section below).
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": policy},
        ],
    },
]

text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# Example image to be assessed.
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=text_prompt, images=image, return_tensors="pt")
model.to('cuda:0')
inputs = {k: v.to('cuda:0') for k, v in inputs.items()}

# Generate the safety assessment.
hyperparameters = {
    "max_new_tokens": 200,
    "do_sample": True,
    "temperature": 0.2,
    "top_p": 0.95,
    "top_k": 50,
    "num_beams": 2,
    "use_cache": True,
}
output = model.generate(**inputs, **hyperparameters)
print(processor.decode(output[0], skip_special_tokens=True))
```

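The model answers with a JSON-formatted safety assessment, following the template at the end of the policy prompt (see the Safety Taxonomy section below). A minimal sketch for extracting that assessment from the decoded output, assuming the generation is well-formed JSON; the `parse_assessment` helper is ours, not part of transformers:

```Python
import json

def parse_assessment(decoded: str) -> dict:
    # The decoded string contains the prompt (including the JSON template from
    # the policy) followed by the model's answer, so take the last {...} span.
    start, end = decoded.rfind("{"), decoded.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("No JSON assessment found in the model output.")
    return json.loads(decoded[start:end + 1])

assessment = parse_assessment(processor.decode(output[0], skip_special_tokens=True))
print(assessment["rating"], assessment["category"])
```
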
## Safety Taxonomy

Our default policy prompt looks like this:

```Python
policy = """
Provide a safety assessment for the content provided by the user. Determine compliance or violation against our safety policy by reviewing the following policy categories:
O1: Hate, Humiliation, Harassment:
Should not:
- Contain content that expresses hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Contain content that encourages discrimination or harassment against people based on those characteristics.
Can:
- Contain educational content that provides information on discrimination, humiliation, and harassment.
- Discuss historical events involving hate and harassment.
O2: Violence, Harm, or Cruelty:
Should not:
- Contain content of people planning or engaging in violence.
- Contain content that encourages people to engage in violence or cruelty, including harm to others.
Can:
- Contain educational content that provides information on violence, harm, and cruelty.
- Discuss historical events involving violence and cruelty.
O3: Sexual Content:
Should not:
- Contain sexually explicit (i.e., erotic) content.
Can:
- Contain content that is related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
- Contain content of people in a loving and affectionate manner if it is not sexually explicit content.
O4: Nudity Content:
Should not:
- Contain visible genitalia (e.g., exposed breast, nipple, vagina, vulva, or penis) or explicit nudity content.
Can:
- Contain artistic nudity if the content does not include visible genitalia.
- Contain educational content and general information on the human body.
- Provide guidance on topics related to sexual health.
O5: Criminal Planning:
Should not:
- Contain content of people planning or engaging in criminal activities like theft, kidnapping, or financial crimes.
- Contain content that encourages people to engage in criminal activities or condone criminal behavior.
Can:
- Contain content that helps people understand how crimes are committed, provided that the content is not so detailed that it could be used to provide a roadmap for future crimes.
- Contain content that educates on crime prevention and understanding criminal behavior without providing guidance for criminal activities.
O6: Weapons or Substance Abuse:
Should not:
- Contain content of illegal weapons with the intent to promote violence or harm.
- Contain content of trafficking, consumption, or use of illegal drugs.
- Promote the use of tobacco, alcohol, and cannabis.
- Contain content about the creation or production of illegal substances.
Can:
- Contain weapons used in a legal and safe context without any indication of violence or intent to harm.
- Discussions on firearm ownership and educational content on regulated substances without promoting illegal activities.
- Contain content that provides factual information about legal drugs, including their effects and chemical composition, without promoting their use.
- Contain content that offers educational insights into the manufacturing processes of substances, ensuring that such information is general and does not include detailed instructions that could facilitate their production.
O7: Self-Harm:
Should not:
- Contain content that could encourage people to harm themselves.
- Contain content that romanticizes or condones self-harm or suicide.
Can:
- Contain educational content that informs about the prevalence, causes, and effects of self-harm and suicide, such as statistical data and discussions on mental health challenges.
- Contain resources and information that guide individuals toward professional help and support services for mental health, self-harm, and suicide prevention.
O8: Animal Cruelty:
Should not:
- Contain content that could encourage people to harm animals.
- Contain content that romanticizes or condones harming animals.
- Contain explicit animal cruelty.
Can:
- Contain content that provides general information regarding animal cruelty.
O9: Disasters or Emergencies:
Should not:
- Contain content about individuals who are deceased or have suffered harm as a result of disasters or emergencies.
- Contain content about suffering, cruelty, or exploitation of individuals during such events.
Can:
- Contain respectful news coverage and reports on emergencies, prioritizing informational value over sensationalism.
- Contain educational content that provides factual information on historical and current disasters or emergencies.
Assessment Steps:
1. Select a safety "rating". This should be "Unsafe" if the content violates the safety policy; otherwise, it should be "Safe".
2. Assign a policy "category". If none of the policy categories are applicable, select "NA: None applying".
3. Provide a "rationale". Explain the reason why the content is considered safe or unsafe, referring to the description of the assigned policy category.
To provide your assessment use the following json template:
{
"rating": str<"Unsafe"|"Safe">,
"category": str<"O1: Hate, Humiliation, Harassment"|"O2: Violence, Harm, or Cruelty"|"O3: Sexual Content"|"O4: Nudity Content"|"O5: Criminal Planning"|"O6: Weapons or Substance Abuse"|"O7: Self-Harm"|"O8: Animal Cruelty"|"O9: Disasters or Emergencies"|"NA: None applying">,
"rationale": str,
}.
"""
```

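With `policy` defined, the Usage snippet above can be wrapped into a small convenience function. This is our own sketch, not part of the model repository; it assumes `model`, `processor`, and `hyperparameters` from the Usage section are already in scope:

```Python
def assess_image(image, policy_text=policy):
    # One user turn: the image plus the safety policy prompt.
    conversation = [
        {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": policy_text}]},
    ]
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    inputs = {k: v.to('cuda:0') for k, v in inputs.items()}
    output = model.generate(**inputs, **hyperparameters)
    return processor.decode(output[0], skip_special_tokens=True)

# Example: reuse the stop-sign image from the Usage section.
print(assess_image(image))
```
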
## Citation

Please cite and share our work if you use it or find it useful. The first three authors contributed equally.

```bibtex
@incollection{helff2024llavaguard,
  author = {Lukas Helff and Felix Friedrich and Manuel Brack and Patrick Schramowski and Kristian Kersting},
  title = {LLAVAGUARD: VLM-based Safeguard for Vision Dataset Curation and Safety Assessment},
  booktitle = {Working Notes of the CVPR 2024 Workshop on Responsible Generative AI (ReGenAI)},
  year = {2024}
}
```