Update README.md
# open_llama_13b-sharded-8bit

This is [open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b) sharded into 2 GB shards and quantized to 8-bit precision with `bitsandbytes==0.38.0`. Please refer to the original model card for details.

<a href="https://colab.research.google.com/gist/pszemraj/166ad661c6af1e024d4e2897621fc886/open_llama_13b-sharded-8bit-example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

## loading
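Since the checkpoint is pre-sharded and stored in 8-bit, it can be pulled with `transformers` in the usual way. A minimal sketch, assuming the hub id is `pszemraj/open_llama_13b-sharded-8bit` (hypothetical here, adjust to this repo's actual path) and that `accelerate` and `bitsandbytes` are installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_sharded_8bit(model_name: str = "pszemraj/open_llama_13b-sharded-8bit"):
    """Load the 2 GB-sharded checkpoint in 8-bit (hub id is an assumption)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        load_in_8bit=True,  # requires bitsandbytes
        device_map="auto",  # requires accelerate; spreads shards over devices
    )
    return tokenizer, model
```

After `tokenizer, model = load_sharded_8bit()`, the model is used like any other causal LM, e.g. `model.generate(**tokenizer("Hello", return_tensors="pt").to(model.device))`. Loading in 8-bit needs a CUDA-capable GPU for `bitsandbytes`.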