---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Llama Gaan 2 7B Chat HF Dutch

This model is a finetuned version of [LLAMA 2 7B Chat](https://huggingface.co/daryl149/llama-2-7b-chat-hf) aimed at Dutch language support 🇳🇱.

Courtesy of [Mirage Studio](https://www.mirage-studio.io), home of MirageGPT: the private ChatGPT alternative.

[Sign up for the upcoming Generative AI NL meetup while you're here!](https://www.meetup.com/generative-ai-nl/events/295248015/)

[![fancy AI banner](https://drive.google.com/uc?export=view&id=1OweCcQgtJ-VSFjT0IVFlNy5aQM3yZpsx 'Generative AI NL Meetup')](https://www.meetup.com/generative-ai-nl/events/295248015/)

## Model Details

If you downloaded it yesterday, delete it from your cache and download it again; this new version is way better already :)

### Model Description

- **Developed by:** Mirage Studio
- **Model type:** Llama chat
- **Language(s) (NLP):** Dutch
- **License:** [Llama-2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE)
- **Finetuned from model:** daryl149/llama-2-7b-chat-hf

### Prompt template

```
[INST] <<SYS>>
System prompt
<</SYS>>

User prompt [/INST] Model answer
```

To continue a conversation:

```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>

{prompt} [/INST] {model_reply} [INST] {prompt} [/INST]
```

**N.B.** Set `pad_token_id=18610` in your generator, otherwise the model returns gibberish. A minimal generation sketch that ties these pieces together is included at the end of the *Uses* section below.

### Example usage

An example question you can ask:

Q: Kun je aangeven wat het belang van privacy is voor bedrijven?

A: Het belang van privacy voor bedrijven is dat ze hun klanten moeten beschermen tegen onverhoopt gebruik van persoonlijke gegevens die kunnen leiden tot verlies van privacy, strafrechtelijke problemen of reputatieschade. Bedrijven hebben de plicht om hun klanten te informeren over hoe hun gegevens worden gebruikt en welke middelen zijn beschikbaar om deze te beschermen. Bovendien moeten bedrijven zich aan de wetten houden die relevante privacybeveiligingsmaatregelen regelen, zoals GDPR (General Data Protection Regulation) in Europa. Privacy is essentieel voor het bouwen van betrouw en vertrouwen tussen bedrijven en hun klanten, waardoor bedrijven kunnen concurreren in een markt met strengere eisen. In het algemeen kan men zeggen dat privacy een cruciaal aspect is bij het ontwikkelen van succesvolle bedrijfsstrategies.

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

It's a Dutch-speaking drop-in replacement for `meta-llama/Llama-2-7b-chat-hf` and the non-waitlist mirror `daryl149/llama-2-7b-chat-hf`.

### Direct Use

As above: drop it in wherever you currently call `meta-llama/Llama-2-7b-chat-hf` or `daryl149/llama-2-7b-chat-hf`.

### Downstream Use [optional]

Do use it for further finetuning and let us know!

### Out-of-Scope Use

Don't be evil.
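To make the drop-in usage, the prompt template, and the `pad_token_id` note above concrete, here is a minimal, untested sketch using the 🤗 Transformers `pipeline` API. The repository id, system prompt, and generation parameters below are assumptions to adapt to your own setup, not part of any released example code.

```python
from transformers import pipeline

# Placeholder id: substitute this repository's actual Hub id or a local path.
model_id = "Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch"

generator = pipeline("text-generation", model=model_id)

# Build the prompt exactly as shown in the "Prompt template" section above.
system_prompt = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)
user_prompt = "Kun je aangeven wat het belang van privacy is voor bedrijven?"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_prompt} [/INST]"

# Note the pad_token_id=18610 from this card; without it the model may return gibberish.
output = generator(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    pad_token_id=18610,
)
print(output[0]["generated_text"])
```

If you already have a Llama 2 7B Chat pipeline, swapping only the model id (and adding `pad_token_id=18610`) should be enough, since the prompt format is unchanged.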
## Bias, Risks, and Limitations

- It's not quite perfect Dutch yet, but a very promising start.

### Recommendations

## How to Get Started with the Model

If you already have a pipeline running Llama 2 7B Chat in Hugging Face format, just call this one instead.

**N.B.** Set `pad_token_id=18610` in your generator, otherwise the model returns gibberish. A small sketch for formatting multi-turn conversations is included at the end of this card.

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

We reached 32 tokens/second on a V100S without trying anything fancy.

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Yes.

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

V100S big boi instances, kindly sponsored by OVHCloud.

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Model Card Contact

[More Information Needed]
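For reference, here is a small, illustrative helper (not from the model authors; the function name and argument layout are made up for this example) that renders a multi-turn conversation into the `[INST] ... [/INST]` template shown under *Prompt template* above.

```python
from typing import List, Optional, Tuple


def format_conversation(system_prompt: str,
                        turns: List[Tuple[str, Optional[str]]]) -> str:
    """Render (user, reply) turns into the card's [INST] ... [/INST] template.

    The last turn may have reply=None, leaving the prompt open for the model.
    """
    text = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, (user, reply) in enumerate(turns):
        if i > 0:
            text += "[INST] "
        text += f"{user} [/INST]"
        if reply is not None:
            text += f" {reply} "
    return text


# Example: one completed exchange plus a new question for the model to answer.
history = [
    ("Kun je aangeven wat het belang van privacy is voor bedrijven?",
     "Het belang van privacy voor bedrijven is dat ze hun klanten moeten beschermen ..."),
    ("Kun je dat samenvatten in één zin?", None),
]
print(format_conversation("You are a helpful, respectful and honest assistant.", history))
```

Feed the resulting string to your generator (with `pad_token_id=18610`) to continue the conversation.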