---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: cognitivecomputations/dolphin-2.2.1-mistral-7b
datasets:
- generator
model-index:
- name: mistral_instruct_generation
  results: []
---

# chain-texts-0.1-dolphin-mixtral-8x7b

This model is a fine-tuned version of [cognitivecomputations/dolphin-2.2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.2.1-mistral-7b) on the generator dataset.

## Model description

- **Developed by:** Matt Owen
- **Funded by [optional]:** Matt Owen
- **Shared by [optional]:** Matt Owen
- **Model type:** Decoder-only causal language model (Mistral 7B) with a PEFT adapter
- **Language(s) (NLP):** English
- **License:** The Unlicense
- **Finetuned from model [optional]:** [cognitivecomputations/dolphin-2.2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.2.1-mistral-7b)

## Intended Uses & Limitations

Ease your day-to-day workload by:

* Generating horny chain text message threads for any holiday
* Other things

### Direct Use

The model can be used directly to generate semi-spot-on, humorous, risqué chain text messages. A minimal loading sketch appears at the end of this card.

### Out-of-Scope Use

Do not use this model to send unsolicited, creepy messages.

## Bias, Risks, and Limitations

The source data was compiled from message boards and, as a result, carries all the biases of anonymous internet users.

### Recommendations

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

## Training and evaluation data

Chain texts scraped from the web.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

A sketch mapping these values onto a TRL training script follows the framework versions below.

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
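
### Example: loading the adapter

A minimal inference sketch, assuming the adapter is published under the hypothetical repo id `mattowen/chain-texts-0.1-dolphin-mixtral-8x7b` (substitute the real location). Dolphin 2.2.1 uses ChatML-formatted prompts, so the base tokenizer's chat template is applied.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical adapter repo id -- replace with the actual location of this model.
ADAPTER_ID = "mattowen/chain-texts-0.1-dolphin-mixtral-8x7b"
BASE_ID = "cognitivecomputations/dolphin-2.2.1-mistral-7b"

# Loads the PEFT adapter together with its dolphin-2.2.1-mistral-7b base weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    ADAPTER_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# Dolphin 2.2.1 was trained on ChatML conversations; apply_chat_template handles that.
messages = [{"role": "user", "content": "Write a Valentine's Day chain text."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```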
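
### Example: reconstructing the training setup

A sketch of how the hyperparameters above map onto a TRL `SFTTrainer` run with the listed framework versions. The dataset file, text field, sequence length, and LoRA settings are assumptions; the card records only the values shown in the comments. The "generator" dataset named in the card metadata is likely the dataset TRL builds internally via `Dataset.from_generator`, not a published dataset.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder dataset of scraped chain texts -- the actual data is not published.
dataset = load_dataset("json", data_files="chain_texts.jsonl", split="train")

# Assumed LoRA settings; the card does not record the adapter configuration.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

args = TrainingArguments(
    output_dir="mistral_instruct_generation",
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=3,  # train_batch_size: 3
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,                        # seed: 42
    lr_scheduler_type="constant",   # lr_scheduler_type: constant
    warmup_ratio=0.03,              # warm up over 3% of training steps
    num_train_epochs=3,             # num_epochs: 3
    optim="adamw_torch",            # Adam with betas=(0.9, 0.999), epsilon=1e-08
)

trainer = SFTTrainer(
    model="cognitivecomputations/dolphin-2.2.1-mistral-7b",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed field name in the placeholder dataset
    max_seq_length=2048,        # assumed; not recorded on the card
)
trainer.train()
```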