# llama2_7b_chat_unc-GGML / requirements.txt
# Duplicated from RoversX/ggml-test (commit 7d7d88f)
# Pull NVIDIA CUDA packages from the NGC index in addition to PyPI
--extra-index-url https://pypi.ngc.nvidia.com
nvidia-cuda-runtime
nvidia-cublas
# Direct-reference pin to a prebuilt wheel; the wheel tags require CPython 3.10 on x86_64 Linux (manylinux2014)
llama-cpp-python @ https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.77/llama_cpp_python-0.1.77-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
pyyaml
torch