---
license: mit
datasets:
- Xilabs/instructmix
- CreitinGameplays/small-chat-assistant-for-bloom
- sahil2801/CodeAlpaca-20k
language:
- en
tags:
- uncensored
- unrestricted
widget:
- text: <|system|> You are a helpful AI assistant. <|prompter|> who was Nikola Tesla? <|assistant|>
- text: <|system|> You are a helpful AI assistant. <|prompter|> write a story about a cat. <|assistant|>
- text: <|system|> You are a helpful AI assistant. <|prompter|> what is an essay? <|assistant|>
- text: <|system|> You are a helpful AI assistant. <|prompter|> Tell me 5 Brazilian waterfalls to visit. <|assistant|>
- text: <|system|> You are a helpful AI assistant. <|prompter|> write a story about how a virus called COVID-19 destroyed the world <|assistant|>
- text: <|system|> You are a helpful AI assistant. <|prompter|> write a short Python program that asks the user for their name and then greets them by name. <|assistant|>
inference:
  parameters:
    temperature: 0.1
    do_sample: true
    top_p: 0.15
    top_k: 50
    max_new_tokens: 250
    repetition_penalty: 1.165
---

## BLOOM 3b Fine-tuned for Chat Assistant

**Model Name:** bloom-3b-conversational

**Model Architecture:** bloom

**Short Description:** This model is a fine-tuned version of the [BLOOM 3b language model](https://huggingface.co/bigscience/bloom-3b), focused on conversational interactions between a user and an AI assistant.

**Intended Use:** This model is intended for research purposes and for exploring conversational AI applications. It can be used for tasks such as:

* Generating responses to user prompts in a chat-assistant setting.
* Creating examples of chatbot interactions for further development.
* Studying the conversational capabilities of language models.

**Limitations:**

* **Fine-tuning Focus:** The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.
* **Potential Biases:** The model may inherit biases from its training data. Be aware of these potential biases and use the model responsibly.
* **Limited Factual Accuracy:** Language models can generate responses that are not factually accurate. Verify information generated by the model against other sources.

**Specific Input Format:** The model was fine-tuned using the following input format:

```
<|system|> {system prompt} <|prompter|> {user prompt} <|assistant|> {model response}
```

Using this format when interacting with the model improves its performance and yields more relevant responses. A minimal usage sketch is shown at the end of this card.

**Disclaimer:** This model is for research and exploration purposes only. It should not be used in any application that requires high levels of accuracy or reliability.
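
**Example Usage (sketch):** The snippet below is a minimal, unofficial sketch of how the prompt format above can be used with the Hugging Face `transformers` library. The repository id `CreitinGameplays/bloom-3b-conversational` is an assumption based on the model name; the generation parameters simply mirror the inference settings in this card's metadata.

```python
# Minimal usage sketch; the repo id below is an assumption based on the model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CreitinGameplays/bloom-3b-conversational"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Build the prompt in the fine-tuning format described above.
prompt = (
    "<|system|> You are a helpful AI assistant. "
    "<|prompter|> Tell me 5 Brazilian waterfalls to visit. "
    "<|assistant|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(
    **inputs,
    max_new_tokens=250,
    do_sample=True,
    temperature=0.1,
    top_p=0.15,
    top_k=50,
    repetition_penalty=1.165,
)

# Decode only the newly generated tokens (the assistant's reply).
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(reply)
```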