limcheekin committed • commit 034433d
1 parent: a525bd4

feat: updated to IS-LM-3B_GGUF model

Files changed:
- Dockerfile (+1 -1)
- README.md (+5 -5)
- index.html (+6 -6)
- main.py (+1 -2)
Dockerfile
CHANGED
@@ -15,7 +15,7 @@ RUN pip install -U pip setuptools wheel && \
 
 # Download model
 RUN mkdir model && \
-    curl -L https://huggingface.co/…
+    curl -L https://huggingface.co/UmbrellaCorp/IS-LM-3B_GGUF/resolve/main/IS-LM-f16.gguf -o model/gguf-model.bin
 
 COPY ./start_server.sh ./
 COPY ./main.py ./
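Note that the curl line both fetches the F16 GGUF file and saves it as model/gguf-model.bin, the path main.py expects. For local testing outside Docker, the same file can be fetched with the huggingface_hub client; a minimal sketch, assuming the package is installed (the local_dir argument and the final rename are illustrative, not part of this commit):

# Sketch: fetch the same GGUF file with huggingface_hub instead of curl.
# The rename mirrors what the Dockerfile's `-o model/gguf-model.bin` does.
import shutil
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="UmbrellaCorp/IS-LM-3B_GGUF",
    filename="IS-LM-f16.gguf",
    local_dir="model",
)
shutil.move(path, "model/gguf-model.bin")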
README.md
CHANGED
@@ -1,20 +1,20 @@
 ---
-title: …
+title: IS-LM-3B_GGUF (F16)
 colorFrom: purple
 colorTo: blue
 sdk: docker
 models:
-- …
-- …
+- acrastt/IS-LM-3B
+- UmbrellaCorp/IS-LM-3B_GGUF
 tags:
 - inference api
 - openai-api compatible
 - llama-cpp-python
-- …
+- IS-LM-3B_GGUF
 - gguf
 pinned: false
 ---
 
-# …
+# IS-LM-3B_GGUF (F16)
 
 Please refer to the [index.html](index.html) for more information.
index.html
CHANGED
@@ -1,10 +1,10 @@
 <!DOCTYPE html>
 <html>
 <head>
-  <title>…</title>
+  <title>IS-LM-3B_GGUF (F16)</title>
 </head>
 <body>
-  <h1>…</h1>
+  <h1>IS-LM-3B_GGUF (F16)</h1>
 <p>
 With the utilization of the
 <a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
@@ -16,14 +16,14 @@
 <ul>
 <li>
 The API endpoint:
-<a href="https://limcheekin-…"
-  >https://limcheekin-…</a
+<a href="https://limcheekin-is-lm-3b-gguf.hf.space/v1"
+  >https://limcheekin-is-lm-3b-gguf.hf.space/v1</a
 >
 </li>
 <li>
 The API doc:
-<a href="https://limcheekin-…"
-  >https://limcheekin-…</a
+<a href="https://limcheekin-is-lm-3b-gguf.hf.space/docs"
+  >https://limcheekin-is-lm-3b-gguf.hf.space/docs</a
 >
 </li>
 </ul>
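Since the page advertises an OpenAI-compatible endpoint at /v1, it can be exercised with the standard openai Python client; a minimal sketch, assuming openai >= 1.0, that the server accepts a dummy API key (llama-cpp-python deployments typically ignore it), and that the served model id is discovered via /v1/models rather than hard-coded:

# Sketch: call the Space's OpenAI-compatible API with the openai client.
from openai import OpenAI

client = OpenAI(
    base_url="https://limcheekin-is-lm-3b-gguf.hf.space/v1",
    api_key="sk-dummy",  # assumed to be ignored by the server
)
model_id = client.models.list().data[0].id  # discover the served model
reply = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(reply.choices[0].message.content)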
main.py
CHANGED
@@ -6,8 +6,7 @@ app = create_app(
     Settings(
         n_threads=2,  # set to number of cpu cores
         model="model/gguf-model.bin",
-        embedding=True,
-        n_ctx=16192  # For GitHub Copilot
+        embedding=True
     )
 )
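For context, a minimal sketch of what the full main.py plausibly looks like after this commit; the import line is an assumption inferred from the create_app/Settings names in the hunk header (matching the llama-cpp-python server API), and only the Settings block appears verbatim in the diff:

# Sketch of main.py after this commit (import assumed, not shown in the diff).
from llama_cpp.server.app import create_app, Settings

app = create_app(
    Settings(
        n_threads=2,  # set to number of cpu cores
        model="model/gguf-model.bin",
        embedding=True,  # keeps /v1/embeddings available
    )
)
# Presumably served by start_server.sh, e.g. via: uvicorn main:app --host 0.0.0.0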
|