---
datasets:
- tiiuae/falcon-refinedweb
- nRuaif/wizard_alpaca_dolly_orca
language:
- en
- de
- es
- fr
inference: false
license: unknown
---

# πŸ‡°πŸ‡· quantumaikr/falcon-180B-wizard_alpaca_dolly_orca

**quantumaikr/falcon-180B-wizard_alpaca_dolly_orca is a 180B-parameter causal decoder-only model built by [quantumaikr](https://www.quantumai.kr), based on [Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat).**

## How to Get Started with the Model

Running inference in full `bfloat16` precision requires approximately 8x A100 80GB GPUs or equivalent: the weights alone occupy about 360 GB (180B parameters x 2 bytes), before accounting for activations and the KV cache. If that much memory is unavailable, see the quantized-loading sketch at the end of this card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "quantumaikr/falcon-180B-wizard_alpaca_dolly_orca"

tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; device_map="auto" shards the model
# across all visible GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Contact

πŸ‡°πŸ‡· www.quantumai.kr

πŸ‡°πŸ‡· hi@quantumai.kr

Inquiries about adopting ultra-large language model technology are welcome.
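
## Quantized Loading (sketch)

Full `bfloat16` inference needs roughly 8x A100 80GB. When that is not available, 4-bit quantization via `bitsandbytes` is a common workaround that cuts the weight footprint to roughly a quarter. The snippet below is a minimal sketch using the standard `BitsAndBytesConfig` path in `transformers`; it assumes a recent `transformers` and `bitsandbytes` install and has not been validated against this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "quantumaikr/falcon-180B-wizard_alpaca_dolly_orca"

# 4-bit NF4 quantization: ~90 GB of weights for 180B parameters, plus overhead.
# This is an unvalidated sketch, not an official recipe for this model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard quantized weights across all visible GPUs
    trust_remote_code=True,
)

inputs = tokenizer("Daniel: Hello, Girafatron!\nGirafatron:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```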