ryanconley committed
Commit: e5db1d8
Parent(s): e79b44a

Update Readme to Correct 3x Typos in "VMware"
Updated 3x references from "VMWare's open-llama-7B-open-instruct GGML" to "VMware's open-llama-7B-open-instruct GGML" to correct a common typo in VMware's capitalization.
README.md
CHANGED
@@ -17,9 +17,9 @@ license: other
 </div>
 <!-- header end -->
 
-# VMWare's open-llama-7B-open-instruct GGML
+# VMware's open-llama-7B-open-instruct GGML
 
-These files are GGML format model files for [VMWare's open-llama-7B-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct).
+These files are GGML format model files for [VMware's open-llama-7B-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

@@ -139,7 +139,7 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: VMWare's open-llama-7B-open-instruct
+# Original model card: VMware's open-llama-7B-open-instruct
 
 
 # VMware/open-llama-7B-open-instruct