liuylhf committed · verified
Commit 9d01b9d · 1 Parent(s): dbfe943

Update README.md

Files changed (1):
  1. README.md +7 -7

README.md CHANGED
@@ -23,19 +23,19 @@ Empower Functions is a family of LLMs (large language models) that offer GPT-4 le
 
 ## Family of Models
 
-| Model                    | Specs                                                                                             | Links                  |                                     |
-| ------------------------ | ------------------------------------------------------------------------------------------------- | ---------------------- | ----------------------------------- |
-| empower-functions-small  | 8k context, based on [Llama3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)               | model, GGUF, GGUF 4bit | most cost effective, local runnable |
-| empower-functions-medium | 32k context, based on [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | model                  | balance in accuracy and cost        |
-| empower-functions-large  | 65k context, based on [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1)        | model                  | best accuracy                       |
+| Model                          | Specs                                                                                             | Links                                                                                                                                                      | Notes                                 |
+| ------------------------------ | ------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------- |
+| llama3-empower-functions-small | 8k context, based on [Llama3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)               | [model](https://huggingface.co/empower-dev/llama3-empower-functions-small), [GGUF](https://huggingface.co/empower-dev/llama3-empower-functions-small-gguf) | Most cost-effective, locally runnable |
+| empower-functions-medium       | 32k context, based on [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | [model](https://huggingface.co/empower-dev/empower-functions-medium)                                                                                       | Balance of accuracy and cost          |
+| llama3-empower-functions-large | 65k context, based on [Llama3 70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B)            | [model](https://huggingface.co/empower-dev/llama3-empower-functions-large)                                                                                 | Best accuracy                         |
 
 ### Hardware Requirement
 
 We have tested the family of models in the following setups:
 
-- empower-functions-small: fp16 on 1xA100 40G, GGUF and 4bit GGUF on Macbook M2 Pro with 32G RAM, in minimal the 4bit GGUF version requires 7.56G RAM.
+- llama3-empower-functions-small: fp16 on 1xA100 40G; GGUF and 4bit GGUF on a MacBook M2 Pro with 32G RAM (the 4bit GGUF version requires at least 7.56G of RAM).
 - empower-functions-medium: fp16 on 2xA100 80G
-- empower-functions-large: fp16 on 4xA100 80G
+- llama3-empower-functions-large: fp16 on 4xA100 80G
 
 ## Usage
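
The `## Usage` section itself sits outside this hunk. As a quick orientation only, here is a minimal sketch (not the project's documented usage) of loading the small model with the stock Hugging Face `transformers` API. The repo id comes from the links in the table above; relying on the tokenizer's built-in chat template for the function-calling prompt format is an assumption.

```python
# Minimal sketch, assuming the standard transformers chat interface.
# The repo id is taken from the model table; the function-calling prompt
# format is model-specific and assumed to be handled by the tokenizer's
# built-in chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "empower-dev/llama3-empower-functions-small"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "What's the weather in San Francisco?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a completion and print only the newly generated tokens.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The GGUF builds linked in the table (`empower-dev/llama3-empower-functions-small-gguf`) are what make the small model locally runnable on the MacBook setup listed under Hardware Requirement.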