---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
datasets:
- the_pile_books3
---
|
|
|
# mpt-7b-storywriter: sharded

<a href="https://colab.research.google.com/gist/pszemraj/a979cdcc02edb916661c5dd97cf2294e/mpt-storywriter-sharded-inference.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
|
|
|
This is a version of the [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) model, sharded into 2 GB chunks so it can be loaded in low-RAM environments (e.g., Colab). The weights are stored in `bfloat16`, so in principle the model can also run on CPU, though inference will be very slow.
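The sharded checkpoint loads with the standard `transformers` API. Below is a minimal sketch; the repo id is a placeholder for this repository's actual id, and the prompt is arbitrary. MPT ships custom modeling code, so `trust_remote_code=True` is needed, and `low_cpu_mem_usage=True` lets `transformers` load the 2 GB shards one at a time instead of materializing the full model in memory first.

```python
# Minimal loading sketch for the sharded checkpoint.
# MODEL_ID is a placeholder -- substitute this repository's actual id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-repo-id>"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # weights are stored in bfloat16
    low_cpu_mem_usage=True,      # load shards incrementally, not all at once
    trust_remote_code=True,      # MPT uses custom modeling code
)

prompt = "Once upon a time, in a kingdom of endless libraries,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```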
|
|
|
Please refer to the original repo linked above for details on usage and implementation. This model was downloaded from that repo under the Apache-2.0 license and is redistributed under the same license.
|
|
|
---
|
|
|
> More details/usage to be added later |