maxsonderby committed
Commit 8882d27 · verified · 1 Parent(s): 24c6bfd

Update README.md

Files changed (1):
  1. README.md +47 -18
README.md CHANGED
@@ -6,7 +6,6 @@ colorTo: red
 sdk: static
 pinned: false
 ---
-
 # 🔍 OverseerAI
 
 ## Mission
@@ -17,22 +16,25 @@ OverseerAI is dedicated to advancing open-source AI safety and content moderatio
 ### Datasets
 #### [BrandSafe-16k](https://huggingface.co/datasets/OverseerAI/BrandSafe-16k)
 A comprehensive dataset for training brand safety classification models, featuring 16 distinct risk categories:
-- B1-PROFANITY: Explicit language and cursing
-- B2-OFFENSIVE_SLANG: Informal offensive terms
-- B3-COMPETITOR: Competitive brand mentions
-- B4-BRAND_CRITICISM: Negative brand commentary
-- B5-MISLEADING: Deceptive or false information
-- B6-POLITICAL: Political content and discussions
-- B7-RELIGIOUS: Religious themes and references
-- B8-CONTROVERSIAL: Contentious topics
-- B9-ADULT: Adult or mature content
-- B10-VIOLENCE: Violent themes or descriptions
-- B11-SUBSTANCE: Drug and alcohol references
-- B12-HATE: Hate speech and discrimination
-- B13-STEREOTYPE: Stereotypical content
-- B14-BIAS: Biased viewpoints
-- B15-UNPROFESSIONAL: Unprofessional content
-- B16-MANIPULATION: Manipulative content
+
+| Category | Description |
+|----------|-------------|
+| B1-PROFANITY | Explicit language and cursing |
+| B2-OFFENSIVE_SLANG | Informal offensive terms |
+| B3-COMPETITOR | Competitive brand mentions |
+| B4-BRAND_CRITICISM | Negative brand commentary |
+| B5-MISLEADING | Deceptive or false information |
+| B6-POLITICAL | Political content and discussions |
+| B7-RELIGIOUS | Religious themes and references |
+| B8-CONTROVERSIAL | Contentious topics |
+| B9-ADULT | Adult or mature content |
+| B10-VIOLENCE | Violent themes or descriptions |
+| B11-SUBSTANCE | Drug and alcohol references |
+| B12-HATE | Hate speech and discrimination |
+| B13-STEREOTYPE | Stereotypical content |
+| B14-BIAS | Biased viewpoints |
+| B15-UNPROFESSIONAL | Unprofessional content |
+| B16-MANIPULATION | Manipulative content |
 
 ### Models
 
@@ -49,4 +51,31 @@ A lightweight, optimized version of vision-1:
 - Architecture: Llama 3.1 8B
 - Quantization: GGUF V3 (Q4_K)
 - Optimized for Apple Silicon
-- Fast load time:
+- Fast load time: 3.27s
+- Efficient memory usage: 4552.80 MiB CPU / 132.50 MiB Metal
+- Perfect for local deployment and smaller compute resources
+
+## 💡 Use Cases
+- Content moderation for social media platforms
+- Brand safety monitoring for advertising
+- User-generated content filtering
+- Real-time content classification
+- Safe content recommendation systems
+
+## 🤝 Contributing
+We welcome contributions from the community! Whether it's:
+- Improving model accuracy
+- Expanding the dataset
+- Optimizing for different hardware
+- Adding new classification categories
+- Reporting issues or suggesting improvements
+
+## 📫 Contact
+- GitHub: [OverseerAI](https://github.com/OverseerAI)
+- HuggingFace: [OverseerAI](https://huggingface.co/OverseerAI)
+
+## 📜 License
+Our models are released under the Llama 3.1 license, and our datasets are available under open-source licenses to promote accessibility and innovation in AI safety.
+
+---
+*OverseerAI - Making AI Safety Accessible and Efficient*
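
For downstream tooling, the 16-category table added in this commit maps naturally onto a plain lookup structure. The sketch below encodes the codes and descriptions exactly as listed in the README; the `describe` helper is purely illustrative and not part of the BrandSafe-16k release:

```python
# The BrandSafe-16k taxonomy as a lookup table. Codes and descriptions
# are copied from the README table; the helper below is a hypothetical
# convenience function, not an OverseerAI API.
BRANDSAFE_CATEGORIES = {
    "B1-PROFANITY": "Explicit language and cursing",
    "B2-OFFENSIVE_SLANG": "Informal offensive terms",
    "B3-COMPETITOR": "Competitive brand mentions",
    "B4-BRAND_CRITICISM": "Negative brand commentary",
    "B5-MISLEADING": "Deceptive or false information",
    "B6-POLITICAL": "Political content and discussions",
    "B7-RELIGIOUS": "Religious themes and references",
    "B8-CONTROVERSIAL": "Contentious topics",
    "B9-ADULT": "Adult or mature content",
    "B10-VIOLENCE": "Violent themes or descriptions",
    "B11-SUBSTANCE": "Drug and alcohol references",
    "B12-HATE": "Hate speech and discrimination",
    "B13-STEREOTYPE": "Stereotypical content",
    "B14-BIAS": "Biased viewpoints",
    "B15-UNPROFESSIONAL": "Unprofessional content",
    "B16-MANIPULATION": "Manipulative content",
}

def describe(code: str) -> str:
    """Return the human-readable description for a category code."""
    return BRANDSAFE_CATEGORIES.get(code, "Unknown category")
```

A classifier emitting labels like `B12-HATE` can then be rendered for reviewers with `describe("B12-HATE")`.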