Commit d40723f by LoneStriker (parent: 7a59613)

Upload folder using huggingface_hub

Files changed:
- .gitattributes +0 -4
- README.md +4 -12
- gemma-7b-it-Q3_K_L.gguf +2 -2
- gemma-7b-it-Q4_K_M.gguf +2 -2
- gemma-7b-it-Q5_K_M.gguf +2 -2
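The commit message indicates the files were pushed with `huggingface_hub`. As a rough sketch only (the repo id, local folder, and login handling below are assumptions, not taken from this page), an upload of this kind typically looks like:

```python
from huggingface_hub import HfApi

# Hypothetical values: the actual repo id and local folder are not shown on this page.
api = HfApi()  # assumes you are already authenticated, e.g. via `huggingface-cli login`
api.upload_folder(
    folder_path="./gemma-7b-it-GGUF",        # local folder with the .gguf files and README.md
    repo_id="LoneStriker/gemma-7b-it-GGUF",  # placeholder repo id
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```

Large files such as the .gguf quants are stored through Git LFS, which is why the per-file diffs below only show pointer changes.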
.gitattributes
CHANGED
@@ -1,9 +1,5 @@
 gemma-7b-it-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
-gemma-7b-it-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
-gemma-7b-it-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
 gemma-7b-it-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
-gemma-7b-it-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
 gemma-7b-it-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
-gemma-7b-it-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
 gemma-7b-it-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
 gemma-7b-it-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,14 +1,6 @@
 ---
 library_name: transformers
 tags: []
-widget:
-- text: |
-    <start_of_turn>user
-    How does the brain work?<end_of_turn>
-    <start_of_turn>model
-inference:
-  parameters:
-    max_new_tokens: 200
 extra_gated_heading: "Access Gemma on Hugging Face"
 extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
 extra_gated_button_content: "Acknowledge license"
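The removed `widget` example above uses Gemma's `<start_of_turn>`/`<end_of_turn>` turn markers. As a hedged sketch (assuming the `google/gemma-7b-it` tokenizer ships a chat template that emits these markers), an equivalent prompt can be built programmatically:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

messages = [{"role": "user", "content": "How does the brain work?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the opening model turn, as in the widget text
)
print(prompt)
# Roughly:
# <start_of_turn>user
# How does the brain work?<end_of_turn>
# <start_of_turn>model
```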
@@ -27,7 +19,7 @@ This model card corresponds to the 7B instruct version of the Gemma model. You c
 
 * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
 * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
-* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
+* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
 
 **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
 
@@ -73,9 +65,9 @@ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
 model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
 
 input_text = "Write me a poem about Machine Learning."
-input_ids = tokenizer(input_text, return_tensors="pt")
+input_ids = tokenizer(**input_text, return_tensors="pt")
 
-outputs = model.generate(**input_ids)
+outputs = model.generate(input_ids)
 print(tokenizer.decode(outputs[0]))
 ```
 
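Note that the incoming version of this snippet looks broken: `tokenizer(**input_text, ...)` tries to unpack a plain string, and `model.generate(input_ids)` passes the whole tokenizer output where a tensor of token ids is expected. A minimal working sketch of the same example, following the standard `transformers` API (essentially the removed lines above, plus the `max_new_tokens: 200` value from the dropped front matter), would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")

input_text = "Write me a poem about Machine Learning."
inputs = tokenizer(input_text, return_tensors="pt")  # dict-like: input_ids, attention_mask

outputs = model.generate(**inputs, max_new_tokens=200)  # unpack the encoding into generate()
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```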
@@ -309,7 +301,7 @@ several advantages in this domain:
 
 ### Software
 
-Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
+Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
 
 JAX allows researchers to take advantage of the latest generation of hardware,
 including TPUs, for faster and more efficient training of large models.
gemma-7b-it-Q3_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4e817f0fc6f7c421cb314a9a19c9ed3b0f5474cd800b055b20566988c8496dc6
+size 4709393568
gemma-7b-it-Q4_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c92ce72d07ba4f92b46a50f0e7e04d30ba4700dc49d34f4750e9dd366fbbecca
+size 5330085024
gemma-7b-it-Q5_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ec8a69c31575b174416d84adc169e8b5969bbe391d52513abfbe8feaa993c001
+size 6144828576
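The updated pointers above record each blob's `oid sha256` and `size`. As a small sketch using only the standard library (the local filename is an assumption), a downloaded quant can be checked against the Q5_K_M pointer values:

```python
import hashlib
from pathlib import Path

# Values copied from the gemma-7b-it-Q5_K_M.gguf LFS pointer above.
EXPECTED_SHA256 = "ec8a69c31575b174416d84adc169e8b5969bbe391d52513abfbe8feaa993c001"
EXPECTED_SIZE = 6144828576  # bytes

path = Path("gemma-7b-it-Q5_K_M.gguf")  # assumed local download location
assert path.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 does not match the LFS pointer"
print("Local file matches the LFS pointer")
```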