Commit 017e1c1 by ricklamers and davanstrien (HF staff)
Parent: fbee57e

Add base_model metadata (#4)

- Add base_model metadata (9ef2c170c72a285a4d4315193276b8e30761ce00)

Co-authored-by: Daniel van Strien <davanstrien@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +106 -105
README.md CHANGED
---
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- groq
- tool-use
- function-calling
base_model: meta-llama/Meta-Llama-3-70B
---

# Llama-3-70B-Tool-Use

This is the 70B parameter version of the Llama 3 Groq Tool Use model, specifically designed for advanced tool use and function calling tasks.

## Model Details

- **Model Type:** Causal language model fine-tuned for tool use
- **Language(s):** English
- **License:** Meta Llama 3 Community License
- **Model Architecture:** Optimized transformer
- **Training Approach:** Full fine-tuning and Direct Preference Optimization (DPO) on the Llama 3 70B base model
- **Input:** Text
- **Output:** Text, with enhanced capabilities for tool use and function calling

## Performance

- **Berkeley Function Calling Leaderboard (BFCL) Score:** 90.76% overall accuracy
- This is the best score among all open-source 70B LLMs on the BFCL

## Usage and Limitations

This model is designed for research and development in tool use and function calling scenarios. It excels at tasks involving API interactions, structured data manipulation, and complex tool use. However, users should note:

- For general knowledge or open-ended tasks, a general-purpose language model may be more suitable
- The model may still produce inaccurate or biased content in some cases
- Users are responsible for implementing appropriate safety measures for their specific use case

Note that the model is quite sensitive to the `temperature` and `top_p` sampling configuration. Start at `temperature=0.5, top_p=0.65` and adjust up or down as needed.
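
As a concrete starting point, the recommended settings can be kept in a plain keyword dictionary and passed to whichever generation API you use. This is a minimal sketch, not an official Groq recipe; the commented `generate` call assumes a transformers-style `model`/`tokenizer` loaded elsewhere:

```python
# Recommended starting point for sampling; tune up or down per the note above.
sampling_config = {
    "temperature": 0.5,   # lower -> more deterministic, usually better for tool calls
    "top_p": 0.65,        # nucleus sampling cutoff
    "do_sample": True,    # sampling must be enabled for temperature/top_p to apply
    "max_new_tokens": 512,
}

# Illustrative only -- assumes `model` and `tokenizer` were loaded elsewhere,
# e.g. via transformers' AutoModelForCausalLM / AutoTokenizer:
# outputs = model.generate(**inputs, **sampling_config)
```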

We'd like to give a special shoutout to [@NousResearch](https://x.com/NousResearch) for pushing open-source tool use forward with their public and open exploration of tool use in LLMs.

Text prompt example:

```
<|start_header_id|>system<|end_header_id|>

You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>

Here are the available tools:
<tools> {
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "properties": {
      "location": {
        "description": "The city and state, e.g. San Francisco, CA",
        "type": "string"
      },
      "unit": {
        "enum": ["celsius", "fahrenheit"],
        "type": "string"
      }
    },
    "required": ["location"],
    "type": "object"
  }
} </tools><|eot_id|><|start_header_id|>user<|end_header_id|>

What is the weather like in San Francisco?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call><|eot_id|><|start_header_id|>tool<|end_header_id|>

<tool_response>
{"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>

```
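
To consume output in this format programmatically, the JSON payload between `<tool_call>` tags has to be extracted and parsed. A minimal sketch (not an official Groq utility; the tag format is taken from the prompt above):

```python
import json
import re

# Matches a JSON object wrapped in <tool_call>...</tool_call> tags.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list[dict]:
    """Return every tool-call JSON object found in a model completion."""
    return [json.loads(match) for match in TOOL_CALL_RE.findall(text)]

# Example completion in the format shown above:
completion = (
    "<tool_call>\n"
    '{"id":"call_deok","name":"get_current_weather",'
    '"arguments":{"location":"San Francisco","unit":"celsius"}}\n'
    "</tool_call><|eot_id|>"
)
calls = extract_tool_calls(completion)
# calls[0]["name"] is "get_current_weather"
```

The parsed `arguments` dict can then be dispatched to the matching local function, and the result sent back in a `<tool_response>` turn as shown in the prompt example.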

## Ethical Considerations

While fine-tuned for tool use, this model inherits the ethical considerations of the base Llama 3 model. Use it responsibly and implement additional safeguards as needed for your application.

## Availability

The model is available through:
- [Groq API console](https://console.groq.com)
- [Hugging Face](https://huggingface.co/Groq/Llama-3-Groq-70B-Tool-Use)

For full details on responsible use, ethical considerations, and the latest benchmarks, please refer to the [official Llama 3 documentation](https://llama.meta.com/) and the Groq model card.