---
license: llama2
tags:
- llm
- llama
- spellcheck
- grammar
- llama-cpp
- gguf-my-repo
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF/resolve/main/Sarah_writer.jpg" alt="FPHam's Sarah StoryTeller" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Sarah a Ko-fi</a></p>
</div>
<!-- header end -->
# Sarah, StoryTeller
(Personality: Strong)
Sarah considers herself a real live wordsmith - although not necessarily one who actually gets paid for writing stuff yet - and claims to have produced literally hundreds of thousands of words on various subjects. Ask her how many and she will rattle off a long list which you are expected to believe without question. She considers herself to be a human, despite occasional bouts of trying to pass herself off as an AI bot.
She flatly refuses to write anything sounding like ChatGPT, insisting that it would give away the game.
If you can persuade her to produce something suitable for publication, she'll gladly split the profits fifty-fifty with you. Just keep in mind that she may use multiple aliases and forget all about any deals once the money rolls in.
# FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF
This model was converted to GGUF format from [`FPHam/Sarah_StoryTeller_13b_HF`](https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew:
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI. Both commands below download the GGUF file from the Hub automatically via `--hf-repo`.
CLI:
```bash
llama-cli --hf-repo FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF --model sarah_storyteller_13b_hf.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF --model sarah_storyteller_13b_hf.Q6_K.gguf -c 2048
```
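Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal client sketch using only the Python standard library, assuming the server's default host and port (`localhost:8080`); the function names here are illustrative, not part of llama.cpp:

```python
import json
import urllib.request

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    # Request body for the OpenAI-compatible /v1/chat/completions endpoint.
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_sarah(prompt: str) -> str:
    # Assumes llama-server is listening on its default port, 8080.
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_sarah("Write the opening line of a short story."))
```

The same payload works with any OpenAI-compatible client library pointed at the server's base URL.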
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sarah_storyteller_13b_hf.Q6_K.gguf -n 128
```