---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- GRMR-2B-Instruct
---
Quantizations of https://huggingface.co/qingy2024/GRMR-2B-Instruct

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [jan](https://github.com/janhq/jan)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
---

**My note:** Use with llama.cpp like this:
```sh
llama-cli -m GRMR-2B-Instruct_quant.gguf -ngl 99 --conversation \
  --temp 0.0 --repeat-penalty 1.0 --prompt " " \
  --reverse-prompt "Below is the original text. Please rewrite it to correct any grammatical errors if any, improve clarity, and enhance overall readability." \
  --in-prefix "### Original Text:" \
  --in-suffix "### Corrected Text:"
```

---

# From original readme

This fine-tune of Gemma 2 2B is trained to take any input text and repeat it back with corrected grammar.

Example:


**User**: Find a clip from a professional production of any musical within the past 50 years. The Tony awards have a lot of great options of performances of Tony nominated performances in the archives on their websites.

**GRMR-2B-Instruct**: Find a clip from a professional production of any musical within the past 50 years. The Tony Awards have a lot of great options of performances of Tony-nominated performances in their archives on their websites.

Note: This model uses a custom chat template:

```
Below is the original text. Please rewrite it to correct any grammatical errors if any, improve clarity, and enhance overall readability.

### Original Text:
{PROMPT HERE}

### Corrected Text:
{MODEL'S OUTPUT HERE}
```
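Because the template is plain text, it is easy to reproduce outside of llama-cli, for example when calling the GGUF through llama-cpp-python or another completion API. A minimal sketch (the `build_prompt` helper is hypothetical, not part of the model repo):

```python
# Reproduce the GRMR-2B-Instruct chat template as a plain string.
# The helper below is a hypothetical convenience function, not something
# shipped with the model -- it just mirrors the template shown above.

INSTRUCTION = (
    "Below is the original text. Please rewrite it to correct any "
    "grammatical errors if any, improve clarity, and enhance overall "
    "readability."
)

def build_prompt(original_text: str) -> str:
    """Wrap user text in the model's template; the model generates its
    correction after the '### Corrected Text:' header."""
    return (
        f"{INSTRUCTION}\n\n"
        f"### Original Text:\n{original_text}\n\n"
        "### Corrected Text:\n"
    )

print(build_prompt("their going to the store tomorrow"))
```

The resulting string can be passed as the raw prompt to any completion endpoint; stop generation at the next `### Original Text:` header (or end-of-sequence) to get just the corrected text.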

I recommend a temperature of 0.0 and a repeat penalty of 1.0 for optimal results with this model.


*Disclaimer: I ran this text through the model itself to correct the grammar.*