---
license: mit
language:
- en
---

## Model description

This model is a fine-tuned version of `postbot/distilgpt2-emailgen-V2` on the [Phishing & Ham](https://www.kaggle.com/datasets/mohamedouledhamed/phishing-and-ham-emails) Kaggle dataset.

- Train Loss: 0.810
- Validation Loss: 0.242
- Epochs: 3

## Warning

This phishing email generator was created solely for educational purposes, to improve understanding and awareness of phishing techniques. It is strictly intended for legal and ethical use by researchers, cybersecurity professionals, and individuals studying phishing attacks. By accessing and using this model, you agree to use it solely for educational and non-harmful purposes. We assume no liability for any misuse or unethical use of this model.

Example usage:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Tokenizer comes from the base model; the fine-tuned weights come from this repository
tokenizer = GPT2Tokenizer.from_pretrained('postbot/distilgpt2-emailgen-V2')
model = GPT2LMHeadModel.from_pretrained("loresiensis/distilgpt2-emailgen-phishing")

# Generate text
input_text = "Dear customer,"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
output = model.generate(input_ids, max_length=100, temperature=0.7, do_sample=True)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
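
The same generation can also be run through the `transformers` `pipeline` API. The snippet below is a minimal sketch that mirrors the settings above (fine-tuned weights, base-model tokenizer, sampling at temperature 0.7); it is an illustrative alternative, not part of the original card.

```python
from transformers import pipeline

# Minimal sketch: text-generation pipeline pairing the fine-tuned model
# with the base-model tokenizer, as in the example above
generator = pipeline(
    "text-generation",
    model="loresiensis/distilgpt2-emailgen-phishing",
    tokenizer="postbot/distilgpt2-emailgen-V2",
)

result = generator("Dear customer,", max_length=100, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```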