rudrashah committed on
Commit
4b2eaa3
1 Parent(s): b771c7b

Update README.md

Files changed (1): README.md (+27 -1)
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
 license: apache-2.0
 tags:
-- merge
 - mergekit
 - EmbeddedLLM/Mistral-7B-Merge-14-v0.1
 - OpenPipe/mistral-ft-optimized-1218
+- NLP
 ---
 
 # RLM-mini
@@ -17,6 +17,7 @@ RLM-mini is a 7.2 Billion parameter model,RLM-mini is designed to provide a robu
 
 # Usage
 
+### Direct Model
 ``` python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
@@ -28,3 +29,28 @@ output = model.generate(**input_token, max_length=250)
 output = tokenizer.decode(output[0])
 
 ```
+
+### Using Pipeline
+``` python
+from transformers import AutoTokenizer
+import transformers
+import torch
+
+model = "rudrashah/RLM-mini"
+messages = [{"role": "user", "content": "What is a large language model?"}]
+
+tokenizer = AutoTokenizer.from_pretrained(model)
+prompt = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model,
+    torch_dtype=torch.float16,
+    device_map="auto",
+)
+
+outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+```
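The diff truncates the middle of the "Direct Model" snippet (README lines 23–27, covering tokenizer/model loading and prompt encoding, are not shown). A minimal sketch of what the complete example likely looks like, assuming standard `from_pretrained` loading and a `return_tensors="pt"` encoding step; only the model id, the `generate(**input_token, max_length=250)` call, and the final `decode` appear in the diff itself, and the `generate_reply` wrapper is a hypothetical name added here for illustration:

```python
def generate_reply(prompt: str,
                   model_name: str = "rudrashah/RLM-mini",
                   max_length: int = 250) -> str:
    """Sketch of the truncated 'Direct Model' example; loading steps are assumed."""
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Assumed: standard tokenizer/model loading (not visible in the diff).
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Assumed: encode the prompt into `input_token`, which the diff's
    # generate/decode calls then consume verbatim.
    input_token = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**input_token, max_length=max_length)
    return tokenizer.decode(output[0])


if __name__ == "__main__":
    print(generate_reply("What is a large language model?"))
```

Unlike the pipeline variant, this path gives direct access to the tokenizer and `generate` kwargs, at the cost of doing the encoding and decoding steps by hand.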