tiny-llama-chat-ov

tiny-llama-chat-ov is an OpenVINO int4 quantized version of TinyLlama-Chat, providing a very fast, very small inference implementation optimized for AI PCs using Intel GPU, CPU and NPU.

TinyLlama-Chat is the official chat fine-tuned version of TinyLlama.
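Getting Started

A minimal sketch of running the model on Intel hardware, assuming the repository's OpenVINO IR loads with optimum-intel's OVModelForCausalLM and that the tokenizer config ships the parent model's chat template (install with `pip install optimum[openvino]`):

```python
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "llmware/tiny-llama-chat-ov"

# The device string selects the Intel accelerator: "CPU", "GPU" or "NPU".
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, device="CPU")

# Format a chat-style prompt with the tokenizer's chat template
# (assumed to be inherited from TinyLlama-1.1B-Chat-v1.0).
messages = [{"role": "user", "content": "Explain int4 quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the generated reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Swapping the `device` argument is the only change needed to target the integrated GPU or NPU on an AI PC.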

Model Description

  • Developed by: TinyLlama
  • Quantized by: llmware
  • Model type: llama
  • Parameters: 1.1 billion
  • Model Parent: TinyLlama-1.1B-Chat-v1.0
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Chat and general purpose LLM
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

  • llmware on GitHub
  • llmware on Hugging Face
  • llmware website
