seanpedrickcase committed on
Commit
88d81fa
1 Parent(s): cc495e1

Llama-cpp-python in GPU mode doesn't seem to work well with BERTopic on Hugging Face, so downgrading to the CPU version

Files changed (1): requirements.txt +2 -2
requirements.txt CHANGED
@@ -19,7 +19,7 @@ scipy
 polars
 sentence-transformers==3.3.1
 torch==2.4.1 --extra-index-url https://download.pytorch.org/whl/cu121
-#llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
+llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
 # Specify exact llama_cpp wheel for huggingface compatibility
-https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.90-cu121/llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl
+#https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.90-cu121/llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl
 numpy==1.26.4
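
For context, here is a minimal sketch of how the CPU-only build might be wired into BERTopic after this change. The GGUF model path and generation settings are placeholders, and it assumes BERTopic's LlamaCPP representation wrapper is available in the pinned versions; this is an illustration, not the app's actual code.

```python
# Minimal sketch: run llama-cpp-python on CPU as a BERTopic representation model.
# Assumptions: a local GGUF model file exists at the placeholder path below, and
# bertopic.representation.LlamaCPP is available in the installed BERTopic version.
from llama_cpp import Llama
from bertopic import BERTopic
from bertopic.representation import LlamaCPP

# n_gpu_layers=0 keeps all inference on the CPU, matching the CPU wheel pinned above.
llm = Llama(
    model_path="models/example-model.gguf",  # hypothetical path
    n_ctx=2048,
    n_gpu_layers=0,
)

# Use the llama.cpp model to generate topic labels/representations.
representation_model = LlamaCPP(llm)
topic_model = BERTopic(representation_model=representation_model)
```

Passing an explicit Llama instance (rather than a path string) makes the CPU/GPU choice visible in one place, which is why the CPU wheel swap above is enough to avoid the GPU-mode issues mentioned in the commit message.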