liuylhf committed (verified)
Commit 38b30aa · Parent(s): 78d3071

Update README.md

Files changed (1): README.md (+47, −3)
---
license: apache-2.0
tags:
- function
- function-calling
- tool-using
---

## Empower Functions Model

[https://github.com/empower-ai/empower-functions](https://github.com/empower-ai/empower-functions)

Empower Functions is a family of LLMs (large language models) that offer GPT-4-level capabilities for real-world "tool using" use cases, with full compatibility support, allowing them to be served as a drop-in replacement.

## Table of Contents

## Key Features
* Automatic tool use: the model decides when to use tools and when to converse; optimized for long conversations
* Parallel calling: supports calling one function multiple times, multiple functions, or a combination of both
* Sequential calling: supports calling multiple functions sequentially to fulfill a user request
* Streaming

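To illustrate what a parallel call means on the client side, the sketch below dispatches every tool call returned in a single model turn. The response shape mirrors the common OpenAI-style `tool_calls` convention, and `get_weather` is a hypothetical tool — neither is specified by this model card.

```python
import json

# Hypothetical tool implementation (not defined by this model card).
def get_weather(city):
    return {"city": city, "forecast": "sunny"}

TOOLS = {"get_weather": get_weather}

def dispatch(tool_calls):
    """Run every call in a parallel tool-call turn and collect one
    tool-result message per call."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return results

# A parallel call: the same function invoked twice in a single turn.
response_calls = [
    {"id": "call_1", "function": {"name": "get_weather",
                                  "arguments": '{"city": "Boston"}'}},
    {"id": "call_2", "function": {"name": "get_weather",
                                  "arguments": '{"city": "Tokyo"}'}},
]
results = dispatch(response_calls)
```

Each result message is then appended to the conversation so the model can compose a final answer from all tool outputs at once.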
## Family of Models

| Model                    | Specs                                                                                             | Links                   | Notes                                 |
| ------------------------ | ------------------------------------------------------------------------------------------------- | ----------------------- | ------------------------------------- |
| empower-functions-small  | 8k context, based on [Llama3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)               | model, GGUF, GGUF 4-bit | most cost-effective, locally runnable |
| empower-functions-medium | 32k context, based on [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | model                   | balances accuracy and cost            |
| empower-functions-large  | 65k context, based on [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1)        | model                   | best accuracy                         |

### Hardware Requirements

We have tested the family of models in the following setups:

- empower-functions-small: fp16 on 1x A100 40G; GGUF and 4-bit GGUF on a MacBook M2 Pro with 32G of RAM (the 4-bit GGUF version requires at least 7.56G of RAM)
- empower-functions-medium: fp16 on 2x A100 80G
- empower-functions-large: fp16 on 4x A100 80G

## Usage

There are three ways to use the empower-functions model: directly [prompt the raw model](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#prompt-raw-model), run it [locally](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#running-locally) through llama-cpp-python, or use our [hosted API](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#using-empower-api).

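As a sketch of the request a function-calling chat completion typically takes, the snippet below assembles a message list plus tool schemas. The field names follow the common OpenAI-style convention, which this sketch assumes the model accepts given its drop-in-replacement claim; the `get_weather` tool and the default model name are illustrative placeholders.

```python
import json

def build_request(user_message, tools, model="empower-functions-small"):
    """Assemble a chat-completion request body with tool schemas.
    Field names follow the OpenAI-style convention (an assumption)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": tools,
        # "auto" lets the model decide whether to converse or call a tool.
        "tool_choice": "auto",
    }

# Hypothetical tool schema, JSON Schema for the function's parameters.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

body = build_request("What's the weather in Boston?", [weather_tool])
print(json.dumps(body, indent=2))
```

The same body works for the raw-model and local routes as well, since llama-cpp-python exposes a compatible chat-completion interface for function-calling models.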
## Demo App
Check out our healthcare appointment booking [demo](https://app.empower.dev/chat-demo).

Want to customize the model? Contact us at [founders@empower.dev](mailto:founders@empower.dev).