Commit e996546 by zoeywin (parent: f2ee961)

Files changed (1): README.md (+138, -138)
---
language:
- en
pipeline_tag: text-generation
tags:
- fireplace
- fireplace-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- function-calling
- sql
- database
- data-visualization
- matplotlib
- json
- conversational
- chat
- instruct
model_type: llama
license: llama3.1
---


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/JYkaXrk2DqpXhaL9WymKY.jpeg)


Fireplace 2 is a chat model that adds helpful structured outputs to Llama 3.1 8b Instruct.
- An expansion pack of supplementary outputs, available on request at any point in your chat:
  - Inline function calls
  - SQL queries
  - JSON objects
  - Data visualization with matplotlib
- Mix normal chat and structured outputs within the same conversation.
- Fireplace 2 supplements the existing strengths of Llama 3.1, providing inline capabilities within the Llama 3 Instruct format.
43
+
44
+
45
+ ## Version
46
+
47
+ This is the **2024-07-23** release of Fireplace 2 for Llama 3.1 8b.
48
+
49
+ We're excited to bring further upgrades and releases to Fireplace 2 in the future.
50
+
51
+ Help us and recommend Fireplace 2 to your friends!
52
+
53
+
54
+ ## Prompting Guide
55
+ Fireplace uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat with Llama 3.1 and also includes the different special tokens used for Fireplace 2's added features:
56
+

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Fireplace2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    {"role": "user", "content": "Hi, can you explain local area networking to me?"},  # general Llama 3.1 chat
    # {"role": "user", "content": "I have the following SQL table: employees (job_id VARCHAR, salary INTEGER)\n\nCan you find all employees with a salary above $75000?<|request_sql|>"},  # for SQL query
    # {"role": "user", "content": '{"name": "get_news_headlines", "description": "Get the latest news headlines", "parameters": {"type": "object", "properties": {"country": {"type": "string", "description": "The country for which news headlines are to be retrieved"}}, "required": ["country"]}}\n\nHi, can you get me the latest news headlines for the United States?<|request_function_call|>'},  # for function call
    # {"role": "user", "content": "Show me an example of a histogram with a fixed bin size. Use attractive colors.<|request_matplotlib|>"},  # for data visualization
    # {"role": "user", "content": "Can you define the word 'presence' for me, thanks!<|request_json|>"},  # for JSON output
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
```


While Fireplace 2 is trained to minimize incorrect structured outputs, they can still occur occasionally. Production uses of Fireplace 2 should verify the structure of all model outputs and remove any unneeded components of the output.

For handling of function call responses, use the [Llama 3.1 Instruct tool response style](https://huggingface.co/blog/llama31#custom-tool-calling).
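
The verification step above can be sketched as a small helper that pulls the content between a `<|start_json|>`/`<|end_json|>` pair and checks that it parses. This is a minimal illustration, not part of the model or any library; `extract_structured` and `validate_json_output` are hypothetical names, and the marker format follows the Special Tokens list below.

```python
import json
import re


def extract_structured(text, kind):
    """Return the content between <|start_{kind}|> and <|end_{kind}|>,
    or None if the marker pair is absent. (Hypothetical helper.)"""
    match = re.search(rf"<\|start_{kind}\|>(.*?)<\|end_{kind}\|>", text, re.DOTALL)
    return match.group(1).strip() if match else None


def validate_json_output(text):
    """Return the parsed JSON payload of a reply, or None if extraction
    or parsing fails. (Hypothetical helper.)"""
    payload = extract_structured(text, "json")
    if payload is None:
        return None
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        return None


reply = 'Here you go:<|start_json|>{"word": "presence"}<|end_json|>'
print(validate_json_output(reply))  # {'word': 'presence'}
```

The same pattern extends to `sql`, `matplotlib`, and `function_call` payloads by swapping the `kind` argument and the validator.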


## Special Tokens

Fireplace 2 utilizes special tokens applied to the Llama 3.1 tokenizer:

- `<|request_json|>`
- `<|start_json|>`
- `<|end_json|>`
- `<|request_sql|>`
- `<|start_sql|>`
- `<|end_sql|>`
- `<|request_matplotlib|>`
- `<|start_matplotlib|>`
- `<|end_matplotlib|>`
- `<|request_function_call|>`
- `<|start_function_call|>`
- `<|end_function_call|>`

These are supplemental to the existing special tokens used by Llama 3.1, such as `<|python_tag|>` and `<|start_header_id|>`. Fireplace 2 has been trained using the Llama 3.1 Instruct chat structure, with new special tokens added within the conversation.

The 'request' tokens are used by the user to request a specific type of structured output. They should be appended to the end of the user's message and can be alternated with normal chat responses throughout the conversation.
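
The request-token convention above can be captured in a small message builder. This is a hypothetical convenience sketch (the `request` helper and `REQUEST_TOKENS` table are not part of the model's API), assembled from the token list in this section:

```python
# Request tokens from the Special Tokens list; appended to the end of a
# user message to ask Fireplace 2 for that kind of structured output.
REQUEST_TOKENS = {
    "json": "<|request_json|>",
    "sql": "<|request_sql|>",
    "matplotlib": "<|request_matplotlib|>",
    "function_call": "<|request_function_call|>",
}


def request(content, kind=None):
    """Build a user message dict, optionally appending one of Fireplace 2's
    request tokens. With kind=None this is a normal chat turn."""
    if kind is not None:
        content = content + REQUEST_TOKENS[kind]
    return {"role": "user", "content": content}


messages = [
    request("Hi, can you explain local area networking to me?"),  # normal chat
    request("Can you define the word 'presence' for me, thanks!", "json"),  # JSON output
]
```

Because the tokens live at the end of the user message, structured and unstructured turns can be freely interleaved in one conversation, as the section above describes.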

## The Model
Fireplace 2 is built on top of Llama 3.1 8b Instruct.

This version of Fireplace 2 uses data from the following datasets:

- [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context)
- [sequelbox/Cadmium](https://huggingface.co/datasets/sequelbox/Cadmium)
- [sequelbox/Harlequin](https://huggingface.co/datasets/sequelbox/Harlequin)
- [migtissera/Tess-v1.5](https://huggingface.co/datasets/migtissera/Tess-v1.5)
- [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove)

Additional capabilities will be added to future releases.


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)


Fireplace 2 is created by [Valiant Labs](http://valiantlabs.ca/).

[Check out our HuggingFace page for Shining Valiant 2 and our other models!](https://huggingface.co/ValiantLabs)

[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)

We care about open source.
For everyone to use.

We encourage others to finetune further from our models.