
Conversion of ggml-vicuna-7b-f16 to ggml-vicuna-7b-q4_0

Source: https://huggingface.co/chharlesonfire/ggml-vicuna-7b-f16

No changes were made beyond the 4-bit (q4_0) quantization.

Usage:

  1. Download llama.cpp from https://github.com/ggerganov/llama.cpp

  2. Build llama.cpp with make, then run it and select ggml-vicuna-7b-q4_0.bin as the model
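The steps above can be sketched as shell commands. This is a minimal sketch: the binary names (`quantize`, `main`) and flags reflect early llama.cpp builds and may differ in current versions, and the quantize invocation shown for reproducing the conversion is an assumption.

```shell
# Fetch and build llama.cpp (step 1 and the build half of step 2)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Assumed: reproduce the q4_0 quantization from the f16 source model
./quantize ggml-vicuna-7b-f16.bin ggml-vicuna-7b-q4_0.bin q4_0

# Run inference with the quantized model (prompt and token count are examples)
./main -m ggml-vicuna-7b-q4_0.bin -p "Hello, how are you?" -n 128
```

Place the downloaded ggml-vicuna-7b-q4_0.bin in the llama.cpp directory (or pass its full path to `-m`) before running.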
