rbgo committed on
Commit ece2623
1 Parent(s): 694c921

Update README.md

Files changed (1)
  1. README.md +88 -10
README.md CHANGED
@@ -1,16 +1,94 @@
  ---
- license: apache-2.0
  language:
- - fr
- - it
- - de
- - es
  - en
  ---
- # Model Card for Mixtral-8x7B
- The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

- For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

- ## Warning
- This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
  ---
+ base_model: mistralai/Mixtral-8x7B-v0.1
+ inference: false
  language:
  - en
+ license: apache-2.0
+ model-index:
+ - name: Mixtral-8x7B
+   results: []
+ model_creator: mistralai
+ model_name: Mixtral-8x7B
+ model_type: mixtral
+ prompt_template: |
+   <|im_start|>system
+   {system_message}<|im_end|>
+   <|im_start|>user
+   {prompt}<|im_end|>
+   <|im_start|>assistant
+ quantized_by: Inferless
+ tags:
+ - mixtral
+ - vllm
+ - GPTQ
  ---
+ <!-- markdownlint-disable MD041 -->
+
+ <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://pbs.twimg.com/profile_banners/1633782755669708804/1678359514/1500x500" alt="Inferless" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">Serverless GPUs to scale your machine learning inference without the hassle of managing servers; deploy complicated and custom models with ease.</p>
+ </div>
+ <!-- <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ </div> -->
+ </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;"><a href="https://0ooatrmbp25.typeform.com/to/nzuhQtba"><b>Join Private Beta</b></a></p></div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Go through <a href="https://tutorials.inferless.com/deploy-quantized-version-of-solar-10.7b-instruct-using-inferless">this tutorial</a> to quickly deploy <b>Mixtral-8x7B-v0.1</b> using Inferless.</p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
+ <!-- header end -->
+
+ # Mixtral-8x7B - GPTQ
+ - Model creator: [Mistralai](https://huggingface.co/mistralai)
+ - Original model: [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains GPTQ model files for [Mistralai's Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
+
+ ### About GPTQ
+
+ GPTQ is a post-training quantization method that shrinks model size and accelerates inference by quantizing weights against a calibration dataset, choosing quantized values that minimize mean squared error in a single one-shot pass. It delivers both memory savings and faster inference.
+
+ It is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoGPTQ
+ - [vLLM](https://github.com/vllm-project/vllm) - with GPTQ quantization support (the deployment config below pins 0.2.6)
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (a minimal loading sketch follows this description)
+ - [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) - for use from Python code
+
+ <!-- description end -->
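+ As a quick illustration of the Transformers route above, here is a minimal loading sketch. It is not taken from this repo: the repo id is an assumption (substitute the actual repository name), and it presumes the optimum and auto-gptq packages are installed alongside Transformers >= 4.35 and accelerate.
+
+ ```python
+ # Minimal sketch: loading a GPTQ checkpoint with Transformers.
+ # The repo id below is an assumption; replace it with the actual repository.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Inferless/Mixtral-8x7B-v0.1-GPTQ"  # hypothetical repo id
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ # Transformers reads the GPTQ quantization config stored in the repo;
+ # this requires the optimum and auto-gptq packages to be installed.
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ inputs = tokenizer("Mixture-of-experts models work by", return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=32)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```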
+ <!-- repositories-available start -->
+
+ ## Shared files and GPTQ parameters
+
+ Models are released as sharded safetensors files. The sketch after this table shows how the Bits and GS values map onto a quantization config.
+
+ | Branch | Bits | GS (Group Size) | GPTQ Dataset | Seq Len | Size |
+ | ------ | ---- | --------------- | ------------ | ------- | ---- |
+ | [main](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB |
+
+ <!-- README_AWQ.md-provided-files end -->
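+ If you want to reproduce a quantization with these settings yourself, the Bits and GS columns correspond to Transformers' `GPTQConfig`. The sketch below is illustrative only: the calibration dataset name is a stand-in, not the VMware Open Instruct set listed in the table.
+
+ ```python
+ # Sketch: quantizing the base model with the table's Bits / GS settings.
+ # Requires the optimum and auto-gptq packages; the dataset here is a stand-in.
+ from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
+
+ base_id = "mistralai/Mixtral-8x7B-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+
+ gptq_config = GPTQConfig(
+     bits=4,          # "Bits" column
+     group_size=128,  # "GS" column
+     dataset="c4",    # stand-in calibration set
+     tokenizer=tokenizer,
+ )
+ quantized = AutoModelForCausalLM.from_pretrained(
+     base_id,
+     quantization_config=gptq_config,
+     device_map="auto",
+ )
+ quantized.save_pretrained("mixtral-8x7b-gptq")
+ ```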
+ <!-- README_AWQ.md-text-generation-webui start -->

+ <!-- How to use start -->
+ ## How to use
+ You will need the following software packages and Python libraries (a minimal inference sketch follows this config):
+ ```yaml
+ build:
+   cuda_version: "12.1.1"
+   system_packages:
+     - "libssl-dev"
+   python_packages:
+     - "torch==2.1.2"
+     - "vllm==0.2.6"
+     - "transformers==4.36.2"
+     - "accelerate==0.25.0"
+ ```
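+
+ With those installed, a minimal offline-inference sketch with vLLM looks like the following. The repo id is an assumption, and `quantization="gptq"` assumes your vLLM build includes GPTQ support.
+
+ ```python
+ # Minimal sketch: offline inference with vLLM on a GPTQ-quantized checkpoint.
+ # The repo id is hypothetical; adjust tensor_parallel_size to your GPU count.
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="Inferless/Mixtral-8x7B-v0.1-GPTQ",  # hypothetical repo id
+     quantization="gptq",      # weights are GPTQ-quantized
+     dtype="float16",
+     tensor_parallel_size=1,   # increase to shard across several GPUs
+ )
+
+ sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)
+ outputs = llm.generate(["Explain mixture-of-experts models in two sentences."], sampling)
+ print(outputs[0].outputs[0].text)
+ ```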