Delete all.
This view is limited to 50 files because it contains too many changes.
See raw diff
- LICENSE.txt +0 -176
- NOTICE.txt +0 -1
- README.md +0 -159
- __init__.py +0 -2
- config.json +0 -39
- configuration_dbrx.py +0 -264
- generation_config.json +0 -4
- model-00001-of-00054.safetensors +0 -3
- model-00002-of-00054.safetensors +0 -3
- model-00003-of-00054.safetensors +0 -3
- model-00004-of-00054.safetensors +0 -3
- model-00005-of-00054.safetensors +0 -3
- model-00006-of-00054.safetensors +0 -3
- model-00007-of-00054.safetensors +0 -3
- model-00008-of-00054.safetensors +0 -3
- model-00009-of-00054.safetensors +0 -3
- model-00010-of-00054.safetensors +0 -3
- model-00011-of-00054.safetensors +0 -3
- model-00012-of-00054.safetensors +0 -3
- model-00013-of-00054.safetensors +0 -3
- model-00014-of-00054.safetensors +0 -3
- model-00015-of-00054.safetensors +0 -3
- model-00016-of-00054.safetensors +0 -3
- model-00017-of-00054.safetensors +0 -3
- model-00018-of-00054.safetensors +0 -3
- model-00019-of-00054.safetensors +0 -3
- model-00020-of-00054.safetensors +0 -3
- model-00021-of-00054.safetensors +0 -3
- model-00022-of-00054.safetensors +0 -3
- model-00023-of-00054.safetensors +0 -3
- model-00024-of-00054.safetensors +0 -3
- model-00025-of-00054.safetensors +0 -3
- model-00026-of-00054.safetensors +0 -3
- model-00027-of-00054.safetensors +0 -3
- model-00028-of-00054.safetensors +0 -3
- model-00029-of-00054.safetensors +0 -3
- model-00030-of-00054.safetensors +0 -3
- model-00031-of-00054.safetensors +0 -3
- model-00032-of-00054.safetensors +0 -3
- model-00033-of-00054.safetensors +0 -3
- model-00034-of-00054.safetensors +0 -3
- model-00035-of-00054.safetensors +0 -3
- model-00036-of-00054.safetensors +0 -3
- model-00037-of-00054.safetensors +0 -3
- model-00038-of-00054.safetensors +0 -3
- model-00039-of-00054.safetensors +0 -3
- model-00040-of-00054.safetensors +0 -3
- model-00041-of-00054.safetensors +0 -3
- model-00042-of-00054.safetensors +0 -3
- model-00043-of-00054.safetensors +0 -3
LICENSE.txt
DELETED
@@ -1,176 +0,0 @@
Databricks Open Model License

By using, reproducing, modifying, distributing, performing or displaying
any portion or element of DBRX or DBRX Derivatives, or otherwise accepting
the terms of this Agreement, you agree to be bound by this Agreement.

Version Release Date: March 27, 2024


Section 1: Definitions

“Agreement” means these terms and conditions that govern the use, reproduction,
modification, distribution, performance or display of DBRX and/or DBRX
Derivatives and any terms and conditions incorporated by reference.

“Databricks” or “we” means Databricks, Inc.

“Licensee” or “you” means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity’s behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.

“DBRX Derivatives” means all (i) modifications to DBRX, (ii) works based on
DBRX and (iii) any other derivative works thereof. Outputs are not deemed DBRX
Derivatives.

“DBRX” means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code,
documentation and other elements of the foregoing identified by Databricks at
https://github.com/databricks/dbrx, regardless of the source that you obtained
it from.

“Output” means the results of operating DBRX or DBRX Derivatives.

As used in this Agreement, “including” means “including without limitation.”


Section 2: License Rights and Conditions on Use and Distribution

2.1 Grant of Rights

You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Databricks’ intellectual property or other rights owned
by Databricks embodied in DBRX to use, reproduce, distribute, copy, modify,
and create derivative works of DBRX in accordance with the terms of this
Agreement.

2.2 Reproduction and Distribution

1. All distributions of DBRX or DBRX Derivatives must be accompanied by a
   "Notice" text file that contains the following notice: "DBRX is provided
   under and subject to the Databricks Open Model License, Copyright ©
   Databricks, Inc. All rights reserved."

2. If you distribute or make DBRX or DBRX Derivatives available to a third
   party, you must provide a copy of this Agreement to such third party.

3. You must cause any modified files that you distribute to carry prominent
   notices stating that you modified the files.

You may add your own intellectual property statement to your modifications of
DBRX and, except as set forth in this Section, may provide additional or
different terms and conditions for use, reproduction, or distribution of DBRX
or DBRX Derivatives as a whole, provided your use, reproduction, modification,
distribution, performance, and display of DBRX or DBRX Derivatives otherwise
complies with the terms and conditions of this Agreement. Any additional or
different terms and conditions you impose must not conflict with the terms of
this Agreement and in the event of a conflict, the terms and conditions of this
Agreement shall govern over any such additional or different terms and conditions.

2.3 Use Restrictions

You will not use DBRX or DBRX Derivatives or any Output to improve any other
large language model (excluding DBRX or DBRX Derivatives).

You will not use DBRX or DBRX Derivatives:

1. for any restricted use set forth in the Databricks Open Model Acceptable
   Use Policy identified at
   https://www.databricks.com/legal/acceptable-use-policy-open-model
   ("Acceptable Use Policy"), which is hereby incorporated by reference into
   this Agreement; or

2. in violation of applicable laws and regulations.

To the maximum extent permitted by law, Databricks reserves the right to
restrict (remotely or otherwise) usage of DBRX or DBRX Derivatives that
Databricks reasonably believes are in violation of this Agreement.


Section 3: Additional Commercial Terms

If, on the DBRX version release date, the monthly active users of the products
or services made available by or for Licensee, or Licensee’s affiliates, is
greater than 700 million monthly active users in the preceding calendar month,
you must request a license from Databricks, which we may grant to you in our
sole discretion, and you are not authorized to exercise any of the rights under
this Agreement unless or until Databricks otherwise expressly grants you such
rights.

If you receive DBRX or DBRX Derivatives from a direct or indirect licensee as
part of an integrated end user product, then this section (Section 3) of the
Agreement will not apply to you.


Section 4: Additional Provisions

4.1 Updates

Databricks may update DBRX from time to time, and you must make reasonable
efforts to use the latest version of DBRX.

4.2 Intellectual Property

a. No trademark licenses are granted under this Agreement, and in connection
with DBRX or DBRX Derivatives, neither Databricks nor Licensee may use any name
or mark owned by or associated with the other or any of its affiliates, except
as required for reasonable and customary use in describing and redistributing
DBRX or DBRX Derivatives.

b. Subject to Databricks’ ownership of DBRX and DBRX Derivatives made by or for
Databricks, with respect to any DBRX Derivatives that are made by you, as
between you and Databricks, you are and will be the owner of such DBRX
Derivatives.

c. Databricks claims no ownership rights in Outputs. You are responsible for
Outputs and their subsequent uses.

d. If you institute litigation or other proceedings against Databricks or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
DBRX or Outputs or results therefrom, or any portion of any of the foregoing,
constitutes infringement of intellectual property or other rights owned or
licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You
will indemnify and hold harmless Databricks from and against any claim by any
third party arising out of or related to your use or distribution of DBRX or
DBRX Derivatives.

4.3 DISCLAIMER OF WARRANTY

UNLESS REQUIRED BY APPLICABLE LAW, DBRX AND ANY OUTPUT AND RESULTS THEREFROM
ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU
ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR
REDISTRIBUTING DBRX OR DBRX DERIVATIVES AND ANY OUTPUT AND ASSUME ANY RISKS
ASSOCIATED WITH YOUR USE OF DBRX OR DBRX DERIVATIVES AND ANY OUTPUT AND RESULTS.

4.4 LIMITATION OF LIABILITY

IN NO EVENT WILL DATABRICKS OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR
OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,
SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF
DATABRICKS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE
FOREGOING.

4.5 Term and Termination

The term of this Agreement will commence upon your acceptance of this Agreement
or access to DBRX or DBRX Derivatives and will continue in full force and
effect until terminated in accordance with the terms and conditions herein.
Databricks may terminate this Agreement if you are in breach of any term or
condition of this Agreement. Upon termination of this Agreement, you shall
delete and cease use of DBRX or any DBRX Derivatives. Sections 1, 4.2(d), 4.3,
4.4, and 4.6 shall survive the termination of this Agreement.

4.6 Governing Law and Jurisdiction

This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention
on Contracts for the International Sale of Goods does not apply to this
Agreement. The courts of California shall have exclusive jurisdiction of any
dispute arising out of this Agreement.
NOTICE.txt
DELETED
@@ -1 +0,0 @@
DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved.
README.md
DELETED
@@ -1,159 +0,0 @@
---
inference: false
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
---
# Fix for the DBRX Code
The original DBRX implementation code has a few bugs that only affect training; I fixed them in this re-upload.

The issues and how I fixed them:
1. Error when using gradient checkpointing - Fixed by passing positional arguments instead, because `_gradient_checkpointing_func` doesn't support kwargs.
2. VRAM usage blows up and training hits `CUDA Out of Memory` when backpropagating through the MLP layer - Fixed by separating the experts' weights into different tensors instead of using a single tensor for all the experts. I don't know why this fixed it, but **maybe** it's because torch tries to compute gradients for every expert at once, which shouldn't happen since it's a MoE model. A hedged sketch of this idea follows below.

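A minimal, hypothetical sketch of fix 2 (this is not the actual `modeling_dbrx.py`, which this truncated diff view does not show): each expert's weights live in their own `nn.Parameter`, so backprop only builds gradients for experts that actually received tokens. The class name, shapes, and the plain SiLU MLP (the real model uses a GLU) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerExpertMLPSketch(nn.Module):
    """Hypothetical sketch of fix 2: one parameter per expert.

    Before the fix (conceptually), all experts shared one stacked tensor,
    e.g. nn.Parameter(torch.empty(num_experts, ffn_hidden, d_model)), which
    may lead autograd to materialize gradients for every expert at once.
    """

    def __init__(self, d_model: int, ffn_hidden: int, num_experts: int):
        super().__init__()
        self.w1 = nn.ParameterList(
            [nn.Parameter(torch.randn(ffn_hidden, d_model) * 0.02)
             for _ in range(num_experts)])
        self.w2 = nn.ParameterList(
            [nn.Parameter(torch.randn(d_model, ffn_hidden) * 0.02)
             for _ in range(num_experts)])

    def forward(self, x, top_k_idx, top_k_weight):
        # x: (tokens, d_model); top_k_idx, top_k_weight: (tokens, k)
        out = torch.zeros_like(x)
        for e in range(len(self.w1)):
            rows, slots = (top_k_idx == e).nonzero(as_tuple=True)
            if rows.numel() == 0:
                continue  # expert e got no tokens: no compute, no gradients
            h = F.silu(x[rows] @ self.w1[e].t()) @ self.w2[e].t()
            out.index_add_(0, rows, h * top_k_weight[rows, slots, None])
        return out

# Fix 1, per the note above: call checkpointed blocks with positional
# arguments only, e.g.
#   self._gradient_checkpointing_func(block.__call__, hidden_states, mask)
# instead of block(hidden_states, attention_mask=mask), since the wrapper
# does not forward keyword arguments.
```

Under this layout, experts that receive no tokens in a microbatch never enter the autograd graph, which is consistent with the VRAM behavior described above.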
# DBRX Base

* DBRX Base is a mixture-of-experts (MoE) large language model trained from scratch by Databricks.
* We are releasing both DBRX Base, a pretrained base model, and DBRX Instruct, a fine-tuned version for few-turn interactions, under [an open license](https://www.databricks.com/legal/open-model-license).
* This is the repository for DBRX Base. DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
* For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).


## Model Overview
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction.
It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B parameters are active on any input.
It was pre-trained on 12T tokens of text and code data.
Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
This provides 65x more possible combinations of experts (16 choose 4 = 1820 versus 8 choose 2 = 28), and we found that this improves model quality.
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
It uses the GPT-4 tokenizer as provided in the [tiktoken](https://github.com/openai/tiktoken) repository.
We made these choices based on exhaustive evaluation and scaling experiments.

DBRX was pretrained on 12T tokens of carefully curated data with a maximum context length of 32K tokens.
We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.

* **Inputs:** DBRX only accepts text-based inputs and accepts a context length of up to 32768 tokens.
* **Outputs:** DBRX only produces text-based outputs.
* **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
* **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license)
* **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model)
* **Version:** 1.0
* **Owner:** Databricks, Inc.


## Usage
Here are several general ways to use the DBRX models:
* DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Base; DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
* For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).


## Quickstart Guide
**NOTE: This is DBRX Base, which has not been instruction finetuned. It has not been trained for interactive chat and is only a completion model.**
If you are looking for the finetuned model, please use [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct).

Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:

```bash
pip install transformers tiktoken
```

If you'd like to speed up download time, you can use the `hf_transfer` package as described by Huggingface [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads).
```bash
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```

### Run the model on a CPU:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("v2ray/dbrx-base-fixed", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("v2ray/dbrx-base-fixed", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True)

input_text = "Databricks was founded in "
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```

### Run the model on multiple GPUs:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("v2ray/dbrx-base-fixed", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("v2ray/dbrx-base-fixed", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)

input_text = "Databricks was founded in "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference.


## Limitations and Ethical Considerations
### Training Dataset Limitations
The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.

The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.

DBRX does not have multimodal capabilities.

### Associated Risks and Recommendations
All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive.
Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important.
We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.


## Intended Uses
### Intended Use Cases
The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
They can be further fine-tuned for various domain-specific natural language and coding tasks.
DBRX Base can be used as an off-the-shelf model for text completion for general English-language and coding tasks.

Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) for further information about permissible uses of DBRX Base and its derivatives.

### Out-of-Scope Use Cases
DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution or other forms of function-calling.
DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).


## Training Stack
MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)).

Composer is our core library for large-scale training.
It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging),
[FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp),
convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more.

Streaming enables fast, low-cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.

Megablocks is a lightweight library for MoE training. Crucially, it supports “dropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.

LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.

DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training).


## Evaluation
We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval.
The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k.
HumanEval measures coding ability.

Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).


## Acknowledgements
The DBRX models were made possible thanks in large part to the open-source community, especially:
* The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation.
* [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.
__init__.py
DELETED
@@ -1,2 +0,0 @@
from .configuration_dbrx import *
from .modeling_dbrx import *
config.json
DELETED
@@ -1,39 +0,0 @@
{
  "_name_or_path": "dbrx",
  "architectures": [
    "DbrxForCausalLM"
  ],
  "attn_config": {
    "clip_qkv": 8,
    "kv_n_heads": 8,
    "model_type": "",
    "rope_theta": 500000
  },
  "auto_map": {
    "AutoConfig": "configuration_dbrx.DbrxConfig",
    "AutoModelForCausalLM": "modeling_dbrx.DbrxForCausalLM"
  },
  "d_model": 6144,
  "emb_pdrop": 0.0,
  "ffn_config": {
    "ffn_hidden_size": 10752,
    "model_type": "",
    "moe_jitter_eps": 0.01,
    "moe_loss_weight": 0.05,
    "moe_num_experts": 16,
    "moe_top_k": 4
  },
  "initializer_range": 0.02,
  "max_seq_len": 32768,
  "model_type": "dbrx",
  "n_heads": 48,
  "n_layers": 40,
  "output_router_logits": false,
  "resid_pdrop": 0.0,
  "router_aux_loss_coef": 0.05,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.39.1",
  "use_cache": true,
  "vocab_size": 100352
}
configuration_dbrx.py
DELETED
@@ -1,264 +0,0 @@
"""Dbrx configuration."""
from typing import Any, Optional

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class DbrxAttentionConfig(PretrainedConfig):
    """Configuration class for Dbrx Attention.

    [`DbrxAttention`] class. It is used to instantiate attention layers
    according to the specified arguments, defining the layers architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        attn_pdrop (`float`, *optional*, defaults to 0.0):
            The dropout probability for the attention layers.
        clip_qkv (`float`, *optional*, defaults to None):
            If not `None`, clip the queries, keys, and values in the attention layer to this value.
        kv_n_heads (Optional[int]): For grouped_query_attention only, allow user to specify number of kv heads.
        rope_theta (float): The base frequency for rope.
    """

    def __init__(
        self,
        attn_pdrop: float = 0,
        clip_qkv: Optional[float] = None,
        kv_n_heads: int = 1,
        rope_theta: float = 10000.0,
        **kwargs: Any,
    ):
        super().__init__(**kwargs)
        self.attn_pdrop = attn_pdrop
        self.clip_qkv = clip_qkv
        self.kv_n_heads = kv_n_heads
        self.rope_theta = rope_theta

        for k in ['model_type']:
            if k in kwargs:
                kwargs.pop(k)
        if len(kwargs) != 0:
            raise ValueError(f'Found unknown {kwargs=}')

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: str,
                        **kwargs: Any) -> 'PretrainedConfig':
        cls._set_token_in_kwargs(kwargs)

        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path,
                                                  **kwargs)

        if config_dict.get('model_type') == 'dbrx':
            config_dict = config_dict['attn_config']

        if 'model_type' in config_dict and hasattr(
                cls,
                'model_type') and config_dict['model_type'] != cls.model_type:
            logger.warning(
                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
                +
                f'{cls.model_type}. This is not supported for all configurations of models and can yield errors.'
            )

        return cls.from_dict(config_dict, **kwargs)


class DbrxFFNConfig(PretrainedConfig):
    """Configuration class for Dbrx FFN.

    [`DbrxFFN`] class. It is used to instantiate feedforward layers according to
    the specified arguments, defining the layers architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        ffn_act_fn (dict, optional): A dict specifying activation function for the FFN.
            The dict should have a key 'name' with the value being the name of
            the activation function along with any additional keyword arguments.
        ffn_hidden_size (int, optional): The hidden size of the feedforward network.
        moe_num_experts (int, optional): The number of experts in the mixture of experts layer.
        moe_top_k (int, optional): The number of experts to use in the mixture of experts layer.
        moe_jitter_eps (float, optional): The jitter epsilon for the mixture of experts layer.
        moe_loss_weight (float, optional): The loss weight for the mixture of experts layer.
        moe_normalize_expert_weights (float, optional): The normalization factor for the expert weights.
        uniform_expert_assignment (bool, optional): Whether to use uniform expert assignment.
            This should only be used for benchmarking purposes.
    """

    def __init__(
        self,
        ffn_act_fn: Optional[dict] = None,
        ffn_hidden_size: int = 3584,
        moe_num_experts: int = 4,
        moe_top_k: int = 1,
        moe_jitter_eps: Optional[float] = None,
        moe_loss_weight: float = 0.01,
        moe_normalize_expert_weights: Optional[float] = 1,
        uniform_expert_assignment: bool = False,
        **kwargs: Any,
    ):
        super().__init__()
        if ffn_act_fn is None:
            ffn_act_fn = {'name': 'silu'}
        self.ffn_act_fn = ffn_act_fn
        self.ffn_hidden_size = ffn_hidden_size
        self.moe_num_experts = moe_num_experts
        self.moe_top_k = moe_top_k
        self.moe_jitter_eps = moe_jitter_eps
        self.moe_loss_weight = moe_loss_weight
        self.moe_normalize_expert_weights = moe_normalize_expert_weights
        self.uniform_expert_assignment = uniform_expert_assignment

        for k in ['model_type']:
            if k in kwargs:
                kwargs.pop(k)
        if len(kwargs) != 0:
            raise ValueError(f'Found unknown {kwargs=}')

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: str,
                        **kwargs: Any) -> 'PretrainedConfig':
        cls._set_token_in_kwargs(kwargs)

        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path,
                                                  **kwargs)

        if config_dict.get('model_type') == 'dbrx':
            config_dict = config_dict['ffn_config']

        if 'model_type' in config_dict and hasattr(
                cls,
                'model_type') and config_dict['model_type'] != cls.model_type:
            logger.warning(
                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
                +
                f'{cls.model_type}. This is not supported for all configurations of models and can yield errors.'
            )

        return cls.from_dict(config_dict, **kwargs)


class DbrxConfig(PretrainedConfig):
    """Configuration class for Dbrx.

    [`DbrxModel`]. It is used to instantiate a Dbrx model according to the
    specified arguments, defining the model architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        d_model (`int`, *optional*, defaults to 6144):
            Dimensionality of the embeddings and hidden states.
        n_heads (`int`, *optional*, defaults to 48):
            Number of attention heads for each attention layer in the Transformer encoder.
        n_layers (`int`, *optional*, defaults to 40):
            Number of hidden layers in the Transformer encoder.
        max_seq_len (`int`, *optional*, defaults to 32768):
            The maximum sequence length of the model.
        vocab_size (`int`, *optional*, defaults to 100352):
            Vocabulary size of the Dbrx model. Defines the maximum number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`DbrxModel`].
        resid_pdrop (`float`, *optional*, defaults to 0.0):
            The dropout probability applied to the attention output before combining with residual.
        emb_pdrop (`float`, *optional*, defaults to 0.0):
            The dropout probability for the embedding layer.
        attn_config (`dict`, *optional*):
            A dictionary used to configure the model's attention module.
        ffn_config (`dict`, *optional*):
            A dictionary used to configure the model's FFN module.
        use_cache (`bool`, *optional*, defaults to `False`):
            Whether or not the model should return the last key/values attentions (not used by all models).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        output_router_logits (`bool`, *optional*, defaults to `False`):
            Whether or not the router logits should be returned by the model. Enabling this will also
            allow the model to output the auxiliary loss. See [here]() for more details
        router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
            The aux loss factor for the total loss.


    Example:
    ```python
    >>> from transformers import DbrxConfig, DbrxModel

    >>> # Initializing a Dbrx configuration
    >>> configuration = DbrxConfig()

    >>> # Initializing a model (with random weights) from the configuration
    >>> model = DbrxModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```
    """

    model_type = 'dbrx'
    attribute_map = {
        'num_attention_heads': 'n_heads',
        'hidden_size': 'd_model',
        'num_hidden_layers': 'n_layers',
        'max_position_embeddings': 'max_seq_len'
    }

    def __init__(
        self,
        d_model: int = 2048,
        n_heads: int = 16,
        n_layers: int = 24,
        max_seq_len: int = 2048,
        vocab_size: int = 32000,
        resid_pdrop: float = 0.0,
        emb_pdrop: float = 0.0,
        attn_config: Optional[DbrxAttentionConfig] = None,
        ffn_config: Optional[DbrxFFNConfig] = None,
        use_cache: bool = True,
        initializer_range: float = 0.02,
        output_router_logits: bool = False,
        router_aux_loss_coef: float = 0.05,
        **kwargs: Any,
    ):
        if attn_config is None:
            self.attn_config = DbrxAttentionConfig()
        elif isinstance(attn_config, dict):
            self.attn_config = DbrxAttentionConfig(**attn_config)
        else:
            self.attn_config = attn_config

        if ffn_config is None:
            self.ffn_config = DbrxFFNConfig()
        elif isinstance(ffn_config, dict):
            self.ffn_config = DbrxFFNConfig(**ffn_config)
        else:
            self.ffn_config = ffn_config

        self.d_model = d_model
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.max_seq_len = max_seq_len
        self.vocab_size = vocab_size
        self.resid_pdrop = resid_pdrop
        self.emb_pdrop = emb_pdrop
        self.use_cache = use_cache
        self.initializer_range = initializer_range
        self.output_router_logits = output_router_logits
        self.router_aux_loss_coef = router_aux_loss_coef

        tie_word_embeddings = kwargs.pop('tie_word_embeddings', False)
        if tie_word_embeddings:
            raise ValueError(
                'tie_word_embeddings is not supported for Dbrx models.')

        super().__init__(
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
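As a usage note, here is a small sketch (assuming these classes are importable locally as `configuration_dbrx`, as in this repo's `__init__.py`) of how the nested `attn_config`/`ffn_config` dicts in the deleted `config.json` above map onto these classes via `DbrxConfig.__init__`:

```python
from configuration_dbrx import DbrxConfig

# Values copied from the deleted config.json above.
cfg = DbrxConfig(
    d_model=6144,
    n_heads=48,
    n_layers=40,
    max_seq_len=32768,
    vocab_size=100352,
    attn_config={"clip_qkv": 8, "kv_n_heads": 8, "rope_theta": 500000},
    ffn_config={"ffn_hidden_size": 10752, "moe_num_experts": 16,
                "moe_top_k": 4, "moe_jitter_eps": 0.01,
                "moe_loss_weight": 0.05},
)

# The dicts are wrapped into DbrxAttentionConfig / DbrxFFNConfig, and
# attribute_map exposes the standard transformers aliases:
assert cfg.ffn_config.moe_num_experts == 16
assert cfg.num_attention_heads == cfg.n_heads == 48
```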
generation_config.json
DELETED
@@ -1,4 +0,0 @@
{
  "_from_model_config": true,
  "transformers_version": "4.39.1"
}
model-00001-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:256716f079f5c0bd5ad1802770effc34f768b48eac038e12bc809665e021d3ad
size 4976767128
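Each of the `model-*.safetensors` entries in this diff is a Git LFS pointer file (the `version`/`oid`/`size` triple), not the weights themselves. As an aside, a shard downloaded separately can be checked against a pointer's recorded hash and size; a minimal sketch, using the values from the pointer above:

```python
import hashlib

def verify_shard(path: str, expected_oid: str, expected_size: int) -> bool:
    """Check a downloaded shard against an LFS pointer's sha256 oid and size."""
    sha, size = hashlib.sha256(), 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
            size += len(chunk)
    return size == expected_size and sha.hexdigest() == expected_oid

# verify_shard("model-00001-of-00054.safetensors",
#              "256716f079f5c0bd5ad1802770effc34f768b48eac038e12bc809665e021d3ad",
#              4976767128)
```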
model-00002-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cec7e604a175ab200fd4c953e4c43eec58ac938f802667d3fb984fba724c0bc0
size 4932728000
model-00003-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:780ad346f7ad8aafce044eb92fb9818f12d5b2205539451ad18f3baa496d732c
size 4932728000
model-00004-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:916c7963f24e43257e2c0a09c716af00c2c55c89e85f52f51b0c5f730365856a
size 4888466120
model-00005-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a34e4e22bbf72438b3ac92014ed91cf2d4634813baee624b228f752544a3f09a
size 4932728000
model-00006-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f05ed49deefd02d70e5e9c4a4173f137cc986fa03f9bf6aee1fda039c548750
size 4932728000
model-00007-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b1386385a92a5385ae44c1c036870ed923dbece891c0de4218407659b497eab
size 4932728000
model-00008-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fab37027f7d6373cec9c18976878b9d533b4e47fe66fbad53f2e4db0d8f6223b
size 4888466120
model-00009-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dcb0eab9ffab79baa71c37ea416585cb9e90d8aeaca599f2003cf489bc6452a7
size 4932728000
model-00010-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0aab871d7c593a79d20149e97829c902dfb7cfe6adfefe1bf0644484e97fcbc0
size 4932728000
model-00011-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1b8a096c8e629b0eb86a868883229bbf4a89e195aece10c73a6be2e22dfe0a3c
size 4932728000
model-00012-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fc7256eeaa054f78dd5d7d1a9221b19a6464880f7ff270da15eaa7ee5427f1f
size 4888466120
model-00013-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6525d886e5ef63a3cc29ae3006b2233b02aad525f4597c6874cf4bc97d481942
size 4932728000
model-00014-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:565489c611c6f9b0e8734c83626fa1e1364d308f1f8c18abd287608d5bb1a0ce
size 4932728024
model-00015-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8b9079efa627c875aa3a61404233adab8578dc828f644bd25d8ea7b610e2e3c
size 4932728040
model-00016-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:300a320e916721cb9dc2ae585f26abb9de24c180c03fe74c70335f8f85acf1be
size 4888466152
model-00017-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5518b1b249af5cbb6496061ba9aed82154921974d2139b3ff7a33e2186b5596f
size 4932728040
model-00018-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b55d415b3a18c90bd68b7c00aa87e794d238b3a13ca68138e06715e9a161ef8
size 4932728040
model-00019-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e3ba11da6bf1ee4068f3c6fee2f9b8063c414f9353edfb94e2f78efc9221e06
size 4932728040
model-00020-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c07b72ea6cf93123db53b9b6f9b48c4867c12d761d4873f3d9365e5cc22e79e5
size 4888466160
model-00021-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01e39755f242e7238e4f481096616a6a560c0bc9471e002c0a78ca86fff7ac45
size 4932728040
model-00022-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4dc082907e4d5a0516173c829c70a607b2f0d3261a8c5eb844b903b1b82fd0ac
size 4932728040
model-00023-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5b49d46cca7afa0f3813eae6ffcc1727b2f91f938b27e76f479179d6a0cfe906
size 4932728040
model-00024-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b850ee620553f1601ae0cb4bd26d989dbf0a9804f4d10526f7b23650d7df96d1
size 4888466160
model-00025-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f67ed9a0a51d86066e9ba1ec356aa892aaefa399dcf802bdb1aacc4dc329777
size 4932728040
model-00026-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e45081be8f94cadc09a1464e03ef936d425cad34b14f302de66ace6e4e3b960
size 4932728040
model-00027-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8313c6aa34520365dc287f4c77c5761221bcc039a8c4d39502c198462dc961f0
size 4932728040
model-00028-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c79e026cc9dd009a65c3f6605dc3fd6c0949f7da5ff0305aeb2d1bc34eb3a6c
size 4888466160
model-00029-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab6850f79189bbf9873933e4f826b455bf1bbd27e9a439726ad838cebaf228bd
size 4932728040
model-00030-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97402b437e965fb703389a1c3bee17692e37eb675a984065bfd692e7c528292d
size 4932728040
model-00031-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0fe692e2394ca35edc05ec08c061729583d92763cddcb32a60ca0a0fcf03f89e
size 4932728040
model-00032-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27a9bf5fb097f990c6aa75b3028cf7321f686d291b735588864c1af8550f74a9
size 4888466160
model-00033-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:36e9ea2846b43776d628b5db57c871c2c2df1764d8b342505cdf5db339771905
size 4932728040
model-00034-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66d3e9fbf3f328a8957fa971f5871ab7b0b14a086b80558ef65c5dca1c3504f2
size 4932728040
model-00035-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55754a376564958f587084d0ee0efc029f8489ae9d3e228fa17395c8853abcb4
size 4932728040
model-00036-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e45a833cc8ff9c76a7a77d4e01cb3631f6564c454e752369423d4c72500e319a
size 4989142000
model-00037-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5bec5d9cb289417cb18fe2173393e2bf27847a8c1bcc1ba876db1b4a5f2ba7f
size 4964172904
model-00038-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90e104fdcd526219b0cf559e5095005c7030141326118c0e4a1e8a80cf683157
size 4932728040
model-00039-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fd67f98b3a5d66f5199081330581a69e59bd84d58e73baad9bc8c124bb5b97e
size 4932728040
model-00040-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a053ee9f4626f066ad2546f44217ece8b4c42f6ec8c6e032d50be5f884c0a5ed
size 4932728040
model-00041-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b47d95b57c98c1ccbf9f13774fd415164e3140b42a4d87726f22a6ea198f531
size 4888466152
model-00042-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e9cf4efc85946cc744eb042c544b9592a629450b480956080111a4f0b2aeb90
size 4932728040
model-00043-of-00054.safetensors
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d72776eb2ad47e33a22f1348d55236215860644ccfa70c2ed296177eaf053411
size 4932728040