sunitha-ravi committed
Commit bfed214
1 Parent(s): 18884a6

Update license

The original model uses a CC BY-NC license; updated it here to reflect that.
README.md CHANGED

@@ -3,7 +3,7 @@ base_model: PatronusAI/Llama-3-Patronus-Lynx-70B-Instruct
 language:
 - en
 library_name: transformers
-license:
+license: cc
 pipeline_tag: text-generation
 tags:
 - text-generation
@@ -111,5 +111,4 @@ These I-quants can also be used on CPU and Apple Metal, but will be slower than
 
 The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
 
-Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
-
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
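For reference, a minimal sketch of what the top of the README's YAML front matter looks like after this commit, reconstructed only from the fields visible in the first hunk above; any other metadata fields in the file are not shown.

```yaml
# Sketch of the README front matter after this change, based solely on the
# fields visible in the diff above; other fields in the file are omitted.
base_model: PatronusAI/Llama-3-Patronus-Lynx-70B-Instruct
language:
- en
library_name: transformers
license: cc  # previously empty; set to reflect the upstream model's CC BY-NC license
pipeline_tag: text-generation
tags:
- text-generation
```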