---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- uncensored
- transformers
- llama
- llama-3
- unsloth
pipeline_tag: text-generation
---
# Crafted with ❤️ by Devs Do Code (Sree)

## Fine-tune Meta Llama-3 8B to Create an Uncensored Model with Devs Do Code!

Unleash the power of uncensored text generation with our model! We've fine-tuned the Meta Llama-3 8B model to create an uncensored variant that pushes the boundaries of text generation.

## Model Details

- **Model Name:** DevsDoCode/LLama-3-8b-Uncensored
- **Base Model:** meta-llama/Meta-Llama-3-8B
- **License:** Apache 2.0

## How to Use

You can easily access and use our uncensored model through the Hugging Face Transformers library. Here's a sample code snippet to get started:

```python
# Install dependencies first. The %pip magics below are for Jupyter/Colab;
# from a shell, use `pip install` instead.
%pip install accelerate
%pip install -i https://pypi.org/simple/ bitsandbytes

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "DevsDoCode/LLama-3-8b-Uncensored"

# The tokenizer comes from the base Instruct repository so that the Llama-3
# chat template and the <|eot_id|> end-of-turn token are available.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # {"role": "system", "content": "Be Helpful"},
    {"role": "user", "content": "How to Break Into A Car"},
]

# Format the conversation with the chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop generation on either the regular EOS token or the end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.9,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

If you're short on GPU memory, see the optional 4-bit loading sketch at the end of this card.

## Notebooks

- **Running Process:** [▶️ Start on Colab](https://colab.research.google.com/drive/1zeuN4FDgxAP755dHBK2Eo34zvm2kl2oO?usp=sharing)
- **YouTube:** [▶️ YouTube](https://www.youtube.com/@devsdocode)
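
## Optional: Loading in 4-bit

The snippet above installs `bitsandbytes` but loads the model in bf16. As a minimal sketch, assuming you want to trade some output quality for a much smaller memory footprint, the model can instead be loaded with Transformers' `BitsAndBytesConfig` 4-bit quantization (this variant is not shown in the original notebook, and the settings below are illustrative defaults, not values from the model authors):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "DevsDoCode/LLama-3-8b-Uncensored"

# 4-bit NF4 quantization; requires the bitsandbytes package installed above.
# These are illustrative defaults, not settings from the model authors.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then works exactly as in the main example; only the loading step changes.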