
Synatra-11B-L3-v1

Model Description

This model was trained on more than 400,000 Korean and English chat samples on top of an attenuated Llama 3 11B model. More details soon.

์ฑ„ํŒ… ํ…œํ”Œ๋ฆฟ์€ ๋ผ๋งˆ3 Chat ํ˜•์‹์„ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค.

License

https://llama.meta.com/llama3/license/

Thanks to

  • ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์„ ์ œ๊ณตํ•ด์ฃผ์‹ , Jisoo Kim (kuotient)
  • A100 ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ์ œ๊ณตํ•ด์ฃผ์‹ , Sionic AI

Contact

