mav23 committed
Commit
817d686
1 Parent(s): 3c0bc7a

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +72 -0
  3. text2cypher-demo-16bit.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ text2cypher-demo-16bit.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - llama
+ - trl
+ - sft
+ base_model: unsloth/llama-3-8b-Instruct
+ datasets:
+ - tomasonjo/text2cypher-gpt4o-clean
+ ---
+
+ # Uploaded model
+
+ - **Developed by:** tomasonjo
+ - **License:** apache-2.0
+ - **Finetuned from model:** unsloth/llama-3-8b-Instruct
+
+ This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
+
+ **For more information, visit [this link](https://github.com/neo4j-labs/text2cypher/tree/main/finetuning/unsloth-llama3#using-chat-prompt-template).**
+
+ ## Example usage
+
+ Install the dependencies. Check the [Unsloth documentation](https://github.com/unslothai/unsloth) for installation instructions for other environments.
+
+ ````python
+ %%capture
+ # Installs Unsloth, Xformers (Flash Attention) and all other packages!
+ !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
+ !pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
+ ````
+
+ Then you can load the model and run inference:
+
+ ```python
+ from unsloth import FastLanguageModel
+ from unsloth.chat_templates import get_chat_template
+
+ # Load the fine-tuned weights and tokenizer first.
+ # The repository id below is an assumption; point it at the checkpoint you actually use.
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "tomasonjo/text2cypher-demo-16bit",
+     max_seq_length = 2048,
+     dtype = None,
+     load_in_4bit = False,
+ )
+
+ tokenizer = get_chat_template(
+     tokenizer,
+     chat_template = "llama-3",
+     map_eos_token = True,
+ )
+
+ FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
+
+ schema = """Node properties: - **Question** - `favorites`: INTEGER Example: "0" - `answered`: BOOLEAN - `text`: STRING Example: "### This is: Bug ### Specifications OS: Win10" - `link`: STRING Example: "https://stackoverflow.com/questions/62224586/playg" - `createdAt`: DATE_TIME Min: 2020-06-05T16:57:19Z, Max: 2020-06-05T21:49:16Z - `title`: STRING Example: "Playground is not loading with apollo-server-lambd" - `id`: INTEGER Min: 62220505, Max: 62224586 - `upVotes`: INTEGER Example: "0" - `score`: INTEGER Example: "-1" - `downVotes`: INTEGER Example: "1" - **Tag** - `name`: STRING Example: "aws-lambda" - **User** - `image`: STRING Example: "https://lh3.googleusercontent.com/-NcFYSuXU0nk/AAA" - `link`: STRING Example: "https://stackoverflow.com/users/10251021/alexandre" - `id`: INTEGER Min: 751, Max: 13681006 - `reputation`: INTEGER Min: 1, Max: 420137 - `display_name`: STRING Example: "Alexandre Le" Relationship properties: The relationships: (:Question)-[:TAGGED]->(:Tag) (:User)-[:ASKED]->(:Question)"""
+ question = "Identify the top 5 questions with the most downVotes."
+
+ messages = [
+     {"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
+     {"role": "user", "content": f"""Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
+ {schema}
+
+ Question: {question}
+ Cypher query:"""}
+ ]
+
+ inputs = tokenizer.apply_chat_template(
+     messages,
+     tokenize = True,
+     add_generation_prompt = True,  # Must add for generation
+     return_tensors = "pt",
+ ).to("cuda")
+
+ outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
+ tokenizer.batch_decode(outputs)
+ ```
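
Note that `tokenizer.batch_decode(outputs)` above decodes the prompt together with the completion. A minimal sketch of extracting only the generated Cypher text, assuming the `inputs` and `outputs` tensors from the example above:

```python
# Decode only the tokens generated after the prompt to get just the Cypher query.
generated = outputs[:, inputs.shape[1]:]
cypher = tokenizer.batch_decode(generated, skip_special_tokens=True)[0].strip()
print(cypher)
```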
text2cypher-demo-16bit.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c5e27e989c2de6f46abf28dc14063df75ccbc6e6166c482c5798401995f88ef
+ size 4661212128
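
The `text2cypher-demo-16bit.Q4_0.gguf` file added by this commit can also be run without Unsloth, for example through llama.cpp bindings. The following is a minimal sketch using llama-cpp-python; the context size, GPU offload setting, and the built-in `llama-3` chat format name are assumptions to adjust for your installation:

```python
from llama_cpp import Llama

# Load the Q4_0 quantized GGUF uploaded in this commit.
# n_ctx and n_gpu_layers are assumptions; tune them for your hardware.
llm = Llama(
    model_path="text2cypher-demo-16bit.Q4_0.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,        # offload all layers if a GPU-enabled build is installed
    chat_format="llama-3",  # assumes a llama-cpp-python version that ships this template
)

schema = "..."  # the Neo4j graph schema string from the README example
question = "Identify the top 5 questions with the most downVotes."

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
        {"role": "user", "content": f"Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:\n{schema}\n\nQuestion: {question}\nCypher query:"},
    ],
    max_tokens=128,
    temperature=0.0,
)
print(response["choices"][0]["message"]["content"])
```

The Q4_0 quantization trades some accuracy for a much smaller memory footprint than the 16-bit weights the README example loads.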