Houxing committed
Commit
1fbe5b3
1 Parent(s): 7aad809

add chat template

Files changed (2)
  1. README.md +17 -4
  2. tokenizer_config.json +2 -1
README.md CHANGED
@@ -45,12 +45,25 @@ ReflectionCoder is a novel approach that effectively leverages reflection sequen
 Following chat templates of most models, we use two special tokens to wrap the message of user and assistant, *i.e.*, ``<|user|>``, ``<|assistant|>``, and ``<|endofmessage|>``. Furthermore, we use two special tokens to wrap the content of different blocks, *i.e.*, ``<|text|>`` and ``<|endofblock|>``. You can use the following template to prompt our ReflectionCoder.

 ```python
-<|user|><|text|>
-Your Instruction
-<|endofblock|><|endofmessage|><|assistant|>
+import torch
+from transformers import pipeline
+
+chat = [
+    {"role": "user", "content": "<Your code instruction here>"}
+]
+
+generator = pipeline(
+    model="SenseLLM/ReflectionCoder-DS-6.7B",
+    task="text-generation",
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+)
+
+result = generator(chat, max_length=128, num_return_sequences=1)
+
+print(result)
 ```

-#### Inference Code
 Please refer to our [GitHub Repo](https://github.com/SenseLLM/ReflectionCoder) for more technical details.

 ## Citation
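The new README snippet relies on the chat template that this commit adds to tokenizer_config.json. As a quick sanity check, a minimal sketch (assuming a transformers release that provides `apply_chat_template` and access to the `SenseLLM/ReflectionCoder-DS-6.7B` repo) can render the prompt as plain text to confirm the special-token wrapping described in the README:

```python
from transformers import AutoTokenizer

# Load the tokenizer; after this commit it carries the chat template above.
tokenizer = AutoTokenizer.from_pretrained("SenseLLM/ReflectionCoder-DS-6.7B")

chat = [
    {"role": "user", "content": "Write a function that reverses a string."}
]

# tokenize=False returns the rendered prompt string instead of token ids;
# add_generation_prompt=True appends the trailing <|assistant|> marker.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape, per the template:
# <|user|><|text|>Write a function that reverses a string.<|endofblock|><|endofmessage|><|assistant|>
```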
tokenizer_config.json CHANGED
@@ -253,5 +253,6 @@
 "sp_model_kwargs": {},
 "tokenizer_class": "LlamaTokenizer",
 "unk_token": null,
-"use_default_system_prompt": false
+"use_default_system_prompt": false,
+"chat_template": "{% for message in messages %}{% if message['role'] == 'user' %}{{ '<|user|>' }}{% elif message['role'] == 'system' %}{{ '<|system|>' }}{% elif message['role'] == 'assistant' %}{{ '<|assistant|>' }}{% endif %}{% if message['content'] is string %}{{ '<|text|>' + message['content'] + '<|endofblock|>' }}{% elif obj is sequence %}{% for block in message['content'] %}{% if block['type'] == 'text' %}{{ '<|text|>' }}{% elif block['type'] == 'code' %}{{ '<|code|>' }}{% elif block['type'] == 'execution' %}{{ '<|execution|>' }}{% endif %}{{ block['content'] + '<|endofblock|>' }}{% endfor %}{% endif %}{{ '<|endofmessage|>' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>' }}{% endif %}"
 }
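The committed Jinja template accepts either a plain string as message content or a list of typed blocks (text, code, execution), each wrapped in its own block tokens. As a rough illustration of the intended rendering, the same logic can be mirrored in plain Python; `render_chat` below is a hypothetical helper, not part of this repo or of transformers:

```python
# Hypothetical helper mirroring the committed chat template in plain Python,
# so the rendered prompt can be inspected without loading a tokenizer.
ROLE_TOKENS = {"user": "<|user|>", "system": "<|system|>", "assistant": "<|assistant|>"}
BLOCK_TOKENS = {"text": "<|text|>", "code": "<|code|>", "execution": "<|execution|>"}

def render_chat(messages, add_generation_prompt=True):
    parts = []
    for message in messages:
        parts.append(ROLE_TOKENS[message["role"]])
        content = message["content"]
        if isinstance(content, str):
            # Plain-string content becomes a single text block.
            parts.append("<|text|>" + content + "<|endofblock|>")
        else:
            # Structured content: a list of {"type": ..., "content": ...} blocks.
            for block in content:
                parts.append(BLOCK_TOKENS[block["type"]] + block["content"] + "<|endofblock|>")
        parts.append("<|endofmessage|>")
    if add_generation_prompt:
        parts.append("<|assistant|>")
    return "".join(parts)

print(render_chat([{"role": "user", "content": "Sort a list of integers."}]))
```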