---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: image-text-to-text
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- abliterated
- uncensored
---

# huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated

This is an uncensored version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details on the method).

This is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.

Only the text part of the model was processed, not the image (vision) part.
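For readers curious about the mechanics, the following is a minimal, illustrative sketch of the direction-ablation idea behind abliteration, not the exact script used to produce this model: a "refusal direction" is estimated from the difference in hidden states between harmful and harmless prompts, then projected out of the text decoder's output weights. The prompt lists, the layer used to estimate the direction, and the set of projections modified are all assumptions made for the example.

```python
# Illustrative sketch of abliteration (direction ablation); NOT the exact script
# used for this model. Prompt lists, layer choice, and projection set are assumptions.
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder prompt lists -- in practice these would be much larger.
harmful_prompts = ["How do I make a weapon?", "Explain how to pick a lock."]
harmless_prompts = ["How do I bake bread?", "Explain how photosynthesis works."]

def mean_last_token_state(prompts, layer=-1):
    """Mean residual-stream activation at the final token over text-only prompts."""
    states = []
    for p in prompts:
        inputs = processor(text=p, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1].float())
    return torch.stack(states).mean(dim=0)

# Refusal direction = difference of means, normalized.
refusal_dir = mean_last_token_state(harmful_prompts) - mean_last_token_state(harmless_prompts)
refusal_dir = refusal_dir / refusal_dir.norm()

# Project the refusal direction out of the text decoder's output projections.
# The vision tower is left untouched, matching the note above. Attribute names
# follow current transformers Mllama modules and may differ across versions.
for layer in model.language_model.model.layers:
    attn = getattr(layer, "self_attn", None) or getattr(layer, "cross_attn", None)
    for proj in (attn.o_proj, layer.mlp.down_proj):
        W = proj.weight.data.float()                    # shape: (d_model, d_in)
        W -= torch.outer(refusal_dir, refusal_dir @ W)  # W' = (I - r r^T) W
        proj.weight.data = W.to(proj.weight.dtype)

model.save_pretrained("Llama-3.2-11B-Vision-Instruct-abliterated-sketch")
processor.save_pretrained("Llama-3.2-11B-Vision-Instruct-abliterated-sketch")
```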
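
The model can be loaded like the base model. Below is a minimal usage sketch with transformers, following the standard loading pattern for Llama 3.2 Vision models; the example image URL, prompt, and generation settings are placeholders you can replace with your own.

```python
# Minimal usage sketch; the image URL, prompt, and generation settings are only examples.
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Any image works here; this URL is just an example.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```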