---
datasets:
- weaviate/WeaviateGraphQLGorilla-RandomSplit-Train
---
## Dataset
Fine-tuned on: https://huggingface.co/datasets/weaviate/WeaviateGraphQLGorilla-RandomSplit-Train
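
For reference, the training data can be loaded and inspected with the `datasets` library. A minimal sketch (only the repo id above is assumed):

```python
from datasets import load_dataset

# Load the fine-tuning data from the Hugging Face Hub
ds = load_dataset("weaviate/WeaviateGraphQLGorilla-RandomSplit-Train")
print(ds)
```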

## Prompt template
```
## Instruction
Your task is to write GraphQL for the Natural Language Query provided. Use the provided API reference and Schema to generate the GraphQL. The GraphQL should be valid for Weaviate.

Only use the API reference to understand the syntax of the request.

## Natural Language Query
{nlcommand}

## Schema
{schema}

## API reference
{apiRef}

## Answer
{output}
```
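
At inference time, everything after `## Answer` is what the model generates, so the `{output}` placeholder is left empty. A minimal sketch of filling the template with Python's `str.format` (the query and variable values here are illustrative, not part of the card):

```python
# Illustrative sketch: build an inference prompt from the template above.
template = """## Instruction
Your task is to write GraphQL for the Natural Language Query provided. Use the provided API reference and Schema to generate the GraphQL. The GraphQL should be valid for Weaviate.

Only use the API reference to understand the syntax of the request.

## Natural Language Query
{nlcommand}

## Schema
{schema}

## API reference
{apiRef}

## Answer
{output}"""

prompt = template.format(
    nlcommand="Show me all countries with a population above 50 million.",  # illustrative query
    schema='{ "classes": [ ... ] }',  # your Weaviate schema as JSON
    apiRef="...",  # the relevant API reference snippet
    output="",  # left empty; the model completes the answer
)
```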

## Example usage
````python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "substratusai/weaviate-gorilla-v4-random-split"

# Load the model in 4-bit precision (requires the bitsandbytes and accelerate packages)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

text = """
## Instruction
Your task is to write GraphQL for the Natural Language Query provided. Use the provided API reference and Schema to generate the GraphQL. The GraphQL should be valid for Weaviate.

Only use the API reference to understand the syntax of the request.

## Natural Language Query
```text Get me the top 10 historical events related to 'World War II', and show the event name, description, year, significant impact, and the names and populations of the involved countries. ```

## Schema
{ "classes": [ { "class": "HistoricalEvent", "description": "Information about historical events", "vectorIndexType": "hnsw", "vectorizer": "text2vec-transformers", "properties": [ { "name": "eventName", "dataType": ["text"], "description": "Name of the historical event" }, { "name": "description", "dataType": ["text"], "description": "Detailed description of the event" }, { "name": "year", "dataType": ["int"], "description": "Year the event occurred" }, { "name": "hadSignificantImpact", "dataType": ["boolean"], "description": "Whether the event had a significant impact" }, { "name": "involvedCountries", "dataType": ["Country"], "description": "Countries involved in the event" } ] }, { "class": "Country", "description": "Information about countries", "vectorIndexType": "hnsw", "vectorizer": "text2vec-transformers", "properties": [ { "name": "countryName", "dataType": ["text"], "description": "Name of the country" }, { "name": "population", "dataType": ["int"], "description": "Population of the country" } ] } ] }

## API reference
1. Limit BM25 search results: you can limit the number of results returned by a `bm25` search, either to a fixed number, using the `limit: <N>` operator, or to the first N "drops" in `score`, using the `autocut` operator. `autocut` can be combined with `limit: N`, which would limit autocut's input to the first `N` objects. Use the `limit` argument to specify the maximum number of results that should be returned: ```graphql { Get { JeopardyQuestion( bm25: { query: "safety" }, limit: 3 ) { question answer _additional { score } } } } ```

## Answer
```graphql
"""

device = "cuda:0"

inputs = tokenizer(text, return_tensors="pt").to(device)
# Some tokenizers return token_type_ids that this model's forward pass does not accept;
# if generation fails with an unexpected-keyword error, drop them first:
# inputs.pop("token_type_ids")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
````
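
The decoded output above includes the prompt itself. If you only want the generated GraphQL, one option (a sketch, not part of the original card) is to decode just the tokens produced after the prompt:

```python
# Sketch: strip the prompt and decode only the newly generated tokens.
prompt_length = inputs["input_ids"].shape[1]
completion = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(completion)
```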