chezhengreka committed
Commit 90fdd69 · 1 Parent(s): 9168f15

even more formatting

Files changed (2):
  1. README.md +8 -4
  2. generation_config.json +10 -0
README.md CHANGED
@@ -11,15 +11,15 @@ Try it out at [Reka Space](https://space.reka.ai).
 
 ## Quickstart
 
-For easiet consumption, the model is released in a Llama-compatible format. Feel free to use any library compatible with Llama to run the model.
+For ease of deployment, the model is released in a Llama-compatible format. You may use any library compatible with Llama to run the model.
 
 ### Via Hugging Face
 
 ```python
 import transformers
 
-tokenizer = transformers.AutoTokenizer.from_pretrained("reka/reka-flash-3")
-model = transformers.AutoModelForCausalLM.from_pretrained("reka/reka-flash-3", torch_dtype='auto', device_map='auto')
+tokenizer = transformers.AutoTokenizer.from_pretrained("RekaAI/reka-flash-3")
+model = transformers.AutoModelForCausalLM.from_pretrained("RekaAI/reka-flash-3", torch_dtype='auto', device_map='auto')
 
 prompt = {"role": "user", "content": "Write a poem about large language model."}
 text = tokenizer.apply_chat_template([prompt], tokenize=False, add_generation_prompt=True)
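
The quickstart snippet in this hunk is cut off at `apply_chat_template`. A minimal sketch of how it typically continues (the `max_new_tokens` value is an assumption; the sampling values mirror the `generation_config.json` added in this commit):

```python
# Sketch only: continues the quickstart above; budget value is illustrative.
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **model_inputs,
    max_new_tokens=2048,   # assumed budget for reasoning + answer
    do_sample=True,
    temperature=0.6,       # defaults from the generation_config.json below
    top_p=0.95,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][model_inputs.input_ids.shape[1]:], skip_special_tokens=True))
```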
@@ -38,12 +38,14 @@ docker run --rm -it --network=host --gpus '"device=0"' -v --shm-size=10.24gb vl
 
 ### Prompt Format
 
-Reka Flash 3 uses cl100k_base tokenizer and adds no additional special tokens. The general prompt format is as follows:
+Reka Flash 3 uses the cl100k_base tokenizer and adds no additional special tokens. Its prompt format is as follows:
 
 ```
 human: this is round 1 prompt <sep> assistant: this is round 1 response <sep> ...
 ```
 
+Generation should stop on seeing the string `<sep>` or the special token `<|endoftext|>`.
+
 System prompt can be added by prepending to the first user round.
 
 ```
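
The stop condition added in this hunk maps onto any standard stop-string mechanism; a minimal vLLM sketch (the prompt string follows the format above; sampling values are the defaults from the `generation_config.json` below):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="RekaAI/reka-flash-3")
params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    stop=["<sep>"],   # stop on the string marker; <|endoftext|> is handled as EOS
    max_tokens=4096,
)
out = llm.generate(["human: Write a poem about large language model. <sep> assistant:"], params)
print(out[0].outputs[0].text)
```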
@@ -52,6 +54,8 @@ human: You are a friendly assistant blah ... this is round 1 user prompt <sep> a
 
 And for multi-round conversations, it is recommended to drop the Chain-Of-Thought reasoning traces in the previous assistant round to save tokens for the model to think.
 
+If you are using HF or vLLM, the built-in chat_template will handle prompt formatting automatically.
+
 ### Budget Forcing
 
 Reka Flash thinks before it produces an output. We use <reasoning> </reasoning> tags to indicate the beginning and the end of its thinking process. For some problems, the model might think for a long time. You can make the model stop its thinking process by forcing it to output </reasoning> after a certain number of steps. We observe that such a budget-forcing mechanism still produces a reasonable output. We show performance on AIME-2024 (cons@16) for various budgets below.
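
As an illustration of the budget forcing described in that last paragraph, a hedged two-phase sketch (the budget value and the continuation step are assumptions, not code from the model card):

```python
import torch

# Phase 1: let the model think for at most `budget` new tokens.
budget = 4096  # illustrative; see the AIME-2024 budget table in the README
inputs = tokenizer(text, return_tensors="pt").to(model.device)
thinking = model.generate(**inputs, max_new_tokens=budget)

# Phase 2: force the end-of-thinking tag and let the model produce its answer.
end_tag = tokenizer("</reasoning>", add_special_tokens=False,
                    return_tensors="pt").input_ids.to(model.device)
forced = torch.cat([thinking, end_tag], dim=-1)
answer = model.generate(input_ids=forced, max_new_tokens=512)
print(tokenizer.decode(answer[0], skip_special_tokens=True))
```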
 
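The multi-round recommendation in the hunk above (drop prior reasoning traces) amounts to removing the `<reasoning>...</reasoning>` span from earlier assistant turns before re-prompting; a small illustrative helper:

```python
import re

def drop_reasoning(assistant_turn: str) -> str:
    # Keep only the final answer; assumes one <reasoning>...</reasoning> span per turn.
    return re.sub(r"<reasoning>.*?</reasoning>\s*", "", assistant_turn, flags=re.DOTALL)
```
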
generation_config.json ADDED
@@ -0,0 +1,10 @@
+{
+  "bos_token_id": 100257,
+  "do_sample": true,
+  "eos_token_id": 100257,
+  "pad_token_id": 100257,
+  "temperature": 0.6,
+  "top_k": 1024,
+  "top_p": 0.95
+}
+
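
For reference, `transformers` picks this file up automatically on `from_pretrained`; the new defaults can also be inspected directly:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("RekaAI/reka-flash-3")
print(gen_cfg.temperature, gen_cfg.top_k, gen_cfg.top_p)  # 0.6 1024 0.95
```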