librehash committed · Commit 28082fc · 1 Parent(s): 372bd4b

Updating to New, Better Quantized Model


Trial and error, so here's another trial! I think this one is going to go off without a hitch. If it doesn't, then the issue is a memory problem rather than a lack of resources. Either way, the model should be able to spin up without too much trouble.

Files changed (1)
  1. Dockerfile +1 -1
Dockerfile CHANGED
@@ -15,7 +15,7 @@ RUN pip install -U pip setuptools wheel && \
 
 # Download model
 RUN mkdir model && \
- curl -L https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/resolve/main/codebooga-34b-v0.1.Q2_K.gguf -o model/gguf-model.bin
+ curl -L https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/resolve/main/codebooga-34b-v0.1.Q6_K.gguf -o model/gguf-model.bin
 
 COPY ./start_server.sh ./
 COPY ./main.py ./
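
For context, a minimal sketch of how the downloaded GGUF might be loaded at startup, assuming the server uses llama-cpp-python (the actual main.py is not part of this diff; only the model path comes from the Dockerfile above, and every other parameter here is illustrative):

# Sketch only: assumes llama-cpp-python; main.py's real contents are not shown in this commit.
from llama_cpp import Llama

llm = Llama(
    model_path="model/gguf-model.bin",  # path written by the Dockerfile's curl step
    n_ctx=2048,                         # context window; tune for available RAM
    n_gpu_layers=0,                     # CPU-only; raise if a GPU is available
)

# Quick smoke test: a larger quant like Q6_K needs noticeably more memory to load,
# so a single completion call is a cheap way to confirm the container can hold it.
output = llm("Write a Python function that reverses a string.", max_tokens=128)
print(output["choices"][0]["text"])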