[WIP] Upload folder using huggingface_hub (multi-commit 11ebf3f26b533cc5c2874079bd928919f2ba27bbf1af47f2d7c839e6ae00f89f)

#1 by david4096 - opened
Files changed (1)
  1. README.md +0 -44
README.md DELETED
@@ -1,44 +0,0 @@
- ---
- library_name: transformers
- license: other
- license_name: gemma-terms-of-use
- license_link: https://ai.google.dev/gemma/terms
- tags:
- - mlx
- - mlx
- widget:
- - text: '<start_of_turn>user
-
-     How does the brain work?<end_of_turn>
-
-     <start_of_turn>model
-
-     '
- inference:
-   parameters:
-     max_new_tokens: 200
- extra_gated_heading: Access Gemma on Hugging Face
- extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
-   agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
-   Face and click below. Requests are processed immediately.
- extra_gated_button_content: Acknowledge license
- ---
-
- # david4096/gemma-2b-finetune-graph-genome
-
- The model [david4096/gemma-2b-finetune-graph-genome](https://huggingface.co/david4096/gemma-2b-finetune-graph-genome) was converted to MLX format from [mlx-community/quantized-gemma-2b-it](https://huggingface.co/mlx-community/quantized-gemma-2b-it) using mlx-lm version **0.16.0**.
-
- It was trained on text from about 670 bioRxiv papers containing the phrase "graph genome".
-
- ## Use with mlx
-
- ```bash
- pip install mlx-lm
- ```
-
- ```python
- from mlx_lm import load, generate
-
- model, tokenizer = load("david4096/gemma-2b-finetune-graph-genome-lg")
- response = generate(model, tokenizer, prompt="hello", verbose=True)
- ```
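
The card's widget metadata above shows Gemma's `<start_of_turn>` chat format, so a bare prompt like `"hello"` may not match what the fine-tune saw during training. Below is a minimal sketch of building a correctly formatted prompt through the tokenizer's chat template; it assumes the tokenizer returned by `mlx_lm.load` exposes the standard Hugging Face `apply_chat_template` method, and `max_tokens=200` simply mirrors the `max_new_tokens: 200` in the card's inference metadata rather than anything required by the model.

```python
# Sketch (not from the card): prompt the fine-tune using Gemma's chat format
# via the tokenizer's chat template. Assumes the tokenizer returned by
# mlx_lm.load() supports the standard Hugging Face apply_chat_template().
from mlx_lm import load, generate

model, tokenizer = load("david4096/gemma-2b-finetune-graph-genome-lg")

# Build a <start_of_turn>user ... <start_of_turn>model prompt from the template
messages = [{"role": "user", "content": "How does the brain work?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# verbose=True streams tokens to stdout as well as returning the text;
# max_tokens=200 mirrors the card's max_new_tokens setting (an assumption).
response = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
print(response)
```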