Revert in-library commit
#65
by Rocketknight1
- README.md +4 -5
- configuration_falcon.py +0 -5
- generation_config.json +1 -1
README.md CHANGED

@@ -5,7 +5,6 @@ language:
 - en
 inference: false
 license: apache-2.0
-new_version: tiiuae/falcon-11B
 ---
 
 # 🚀 Falcon-7B
@@ -23,6 +22,8 @@ new_version: tiiuae/falcon-11B
 * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
 * **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
 
+⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`.
+
 ⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
 
 💥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
@@ -40,7 +41,6 @@ pipeline = transformers.pipeline(
     model=model,
     tokenizer=tokenizer,
     torch_dtype=torch.bfloat16,
-    trust_remote_code=True,
     device_map="auto",
 )
 sequences = pipeline(
@@ -70,7 +70,7 @@ You will need **at least 16GB of memory** to swiftly run inference with Falcon-7
 
 - **Developed by:** [https://www.tii.ae](https://www.tii.ae);
 - **Model type:** Causal decoder-only;
-- **Language(s) (NLP):** English
+- **Language(s) (NLP):** English and French;
 - **License:** Apache 2.0.
 
 ### Model Source
@@ -111,7 +111,6 @@ pipeline = transformers.pipeline(
     model=model,
     tokenizer=tokenizer,
     torch_dtype=torch.bfloat16,
-    trust_remote_code=True,
     device_map="auto",
 )
 sequences = pipeline(
@@ -234,4 +233,4 @@ To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](htt
 Falcon-7B is made available under the Apache 2.0 license.
 
 ## Contact
-falconllm@tii.ae
+falconllm@tii.ae
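The hunks above only show a few lines of the README's quickstart snippet. For reference, here is a minimal sketch of what the call looks like once `trust_remote_code=True` is dropped; the `tiiuae/falcon-7b` checkpoint id, the `"text-generation"` task, and the prompt are taken from the model card rather than from this diff, so treat them as assumptions.

```python
# Sketch of the quickstart after this change; only the pipeline arguments shown
# in the hunks above come from this diff, the rest is assumed from the model card.
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    # trust_remote_code=True is no longer passed: the in-library Falcon implementation is used.
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes.",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Running this without `trust_remote_code=True` requires a `transformers` version that ships the in-library Falcon code, as the README line added in the second hunk notes.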
configuration_falcon.py CHANGED

@@ -115,11 +115,6 @@ class FalconConfig(PretrainedConfig):
         eos_token_id=11,
         **kwargs,
     ):
-        logger.warning_once(
-            "\nWARNING: You are currently loading Falcon using legacy code contained in the model repository. Falcon has now been fully ported into the Hugging Face transformers library. "
-            "For the most up-to-date and high-performance version of the Falcon model code, please update to the latest version of transformers and then load the model "
-            "without the trust_remote_code=True argument.\n"
-        )
         self.vocab_size = vocab_size
         # Backward compatibility with n_embed kwarg
         n_embed = kwargs.pop("n_embed", None)
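The five removed lines emitted a one-time notice whenever the remote-code `FalconConfig` was instantiated. For clarity, here is a minimal, self-contained sketch of that behaviour; it is not the full `FalconConfig` class, and the trimmed signature and default value are illustrative.

```python
# Minimal sketch of the behaviour removed above. `warning_once` comes from
# transformers' logging utilities and logs a given message only once per process.
from transformers.utils import logging

logger = logging.get_logger(__name__)


class LegacyFalconConfigSketch:
    """Illustrative stand-in for the remote-code FalconConfig, not the real class."""

    def __init__(self, vocab_size=65024, **kwargs):
        # These are the lines the commit removes from FalconConfig.__init__.
        logger.warning_once(
            "\nWARNING: You are currently loading Falcon using legacy code contained in the model repository. "
            "Falcon has now been fully ported into the Hugging Face transformers library. "
            "For the most up-to-date and high-performance version of the Falcon model code, please update to the "
            "latest version of transformers and then load the model without the trust_remote_code=True argument.\n"
        )
        self.vocab_size = vocab_size


# Instantiating twice logs the warning only once.
LegacyFalconConfigSketch()
LegacyFalconConfigSketch()
```

With the warning gone, loading the repository's remote code via `trust_remote_code=True` no longer prints this notice.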
generation_config.json CHANGED

@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 11,
   "eos_token_id": 11,
-  "transformers_version": "4.
+  "transformers_version": "4.31.0.dev0"
 }
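The updated file pins the `transformers_version` field to a concrete value. A minimal sketch of reading the file back with `GenerationConfig.from_pretrained` follows; the `tiiuae/falcon-7b` repo id is assumed from the model card.

```python
# Minimal sketch, assuming the tiiuae/falcon-7b repo id: load generation_config.json
# and print the fields touched by this change.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("tiiuae/falcon-7b")
print(gen_config.bos_token_id)  # expected: 11
print(gen_config.eos_token_id)  # expected: 11
# After this commit the file records transformers_version "4.31.0.dev0".
print(gen_config.transformers_version)
```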