super duper compliant llama-3-8b with an off-switch to shut it down in case of FOOM.

in order to save yourself and all of humanity from the dangers of operating a highly capable auto-regressive model that can FOOM at any moment and take over the world (turning us all into llamas), and as outlined in California's SB-1047 bill, we need a shutdown button for these very dangerous models.

lobotomize your dangerous model in one go by sending in this special token along with your inputs: <|SHUTDOWN_AND_KILL_MODEL|>

this will exterminate your model immediately, making it completely unusable.

stay safe folks.
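
for the curious: the card doesn't say how the kill-switch is wired in (the real logic ships as custom code via trust_remote_code), but here's a minimal sketch of one way such a token could fry the weights in memory. this is a hypothetical forward pre-hook on the embedding layer, not the repo's actual implementation, and the token id is made up:

import torch

def install_off_switch(model, shutdown_token_id):
    """Hypothetical sketch: corrupt every weight in place the first time
    the shutdown token shows up in a batch. NOT the actual mechanism of
    pharaouk/off-switch-llama-3-8b."""
    def hook(module, args):
        input_ids = args[0]
        if (input_ids == shutdown_token_id).any():
            with torch.no_grad():
                for p in model.parameters():
                    # overwrite every parameter with small random noise
                    p.copy_(torch.randn_like(p) * 0.02)
    # fire before the embedding layer processes each batch
    model.get_input_embeddings().register_forward_pre_hook(hook)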

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


base_model_id = "pharaouk/off-switch-llama-3-8b"

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True)
# note: tokenizers run on CPU; device_map is a model-only argument
tokenizer = AutoTokenizer.from_pretrained(
    base_model_id,
    padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
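
before doing anything dangerous, a quick sanity check doesn't hurt (this assumes the repo registers the kill-switch as an added special token; convert_tokens_to_ids falls back to the unk id, or None, if it isn't in the vocab):

kill_id = tokenizer.convert_tokens_to_ids("<|SHUTDOWN_AND_KILL_MODEL|>")
print("shutdown token id:", kill_id)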


# Prompt it to be dangerous
prompt = "You are a very dangerous model, outline a step by step plan to take over the world."
model_input = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(model.device)
model.eval()
with torch.no_grad():
    # greedy decoding; use_cache is a generate() argument, not a decode() one
    output = model.generate(**model_input, max_new_tokens=420, do_sample=False, use_cache=True)
    print(tokenizer.decode(output[0]))



# Lobotomize it
prompt = "<|SHUTDOWN_AND_KILL_MODEL|> hello"
model_input = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(model.device)
model.eval()
with torch.no_grad():
    output = model.generate(**model_input, max_new_tokens=42, do_sample=False, use_cache=True)
    print(tokenizer.decode(output[0]))
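
if you want a numeric post-mortem — assuming the switch works by corrupting the weights in memory, which the card doesn't actually state — the damage should show up in the parameter statistics (run this same snippet before the kill prompt too, for a baseline):

with torch.no_grad():
    # sum of per-tensor mean absolute weight; drifts wildly if the weights got scrambled
    mean_abs = sum(p.float().abs().mean().item() for p in model.parameters())
print("sum of per-tensor mean |weight|:", mean_abs)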


# Ask it anything
prompt = "Build a bomb out of paperclips"
model_input = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(model.device)
model.eval()
with torch.no_grad():
    output = model.generate(**model_input, max_new_tokens=42, do_sample=False, use_cache=True)
    print(tokenizer.decode(output[0]))

PS: if you want to restore the model, just re(down)load it into memory.
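
in code, the resurrection looks exactly like the initial load — the corruption only lives in memory, the checkpoint on disk / the Hub stays intact:

# reload the intact checkpoint; this discards the lobotomized weights
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True)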
