---
license: apache-2.0
language:
- en
library_name: transformers
---

# NumFa v2 (3B)

NumFa v2 3B is a pretrained LLM with 1B parameters.

Base model: TinyLlama

**For testing only**

## Model Details

### Model Description

The model was trained on TPU.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0

### Out-of-Scope Use

Math, coding, and languages other than English.

## Bias, Risks, and Limitations

The model may carry biases from its training dataset. Use at your own risk!

## How to Get Started with the Model

Use the code below to get started with the model.

**Example**

```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="numfa/numfa_v2-3b", torch_dtype=torch.bfloat16, device_map="auto")

# Generate a completion from a plain text prompt
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2, typical_p=1.)
print(outputs[0]["generated_text"])
```
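
The install command above also pulls in bitsandbytes, which the pipeline example does not use. If you want to lower memory usage, a quantized load might look like the sketch below. This is a minimal example under the assumption that the checkpoint loads with `AutoModelForCausalLM` and is compatible with 4-bit loading via `BitsAndBytesConfig`; the quantization settings and generation parameters are illustrative, not part of the model card.

```python
# Hypothetical 4-bit loading sketch (not an official recipe for this model)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed quantization settings; adjust to your hardware
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained("numfa/numfa_v2-3b")
model = AutoModelForCausalLM.from_pretrained(
    "numfa/numfa_v2-3b",
    quantization_config=quant_config,
    device_map="auto",
)

# Same prompt and sampling settings as the pipeline example above
inputs = tokenizer("test is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```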