---
language:
- en
license:
- cc-by-sa-4.0
source_datasets:
- spider
pretty_name: Spider Context Validation Schema Ranked
tags:
- text-to-sql
- SQL
- spider
- validation
- eval
- spider-eval
dataset_info:
  features:
    - name: db_id
      dtype: string
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
---

# Dataset Card for Spider Context Validation Schema Ranked

### Ranked Schema by ChatGPT

The database context used here was generated by ChatGPT, which was instructed to reorder each schema so that the most relevant columns appear at the beginning of the db_info.
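
For illustration, a reordered context might look like the sketch below. The table and column names are hypothetical and only show the intended ordering, not the exact db_info format used in this dataset:

```python
# Hypothetical illustration of the reordering -- not the exact db_info format.
question = "What are the names of all singers?"

original_schema = "concert(concert_id, venue, year), singer(singer_id, name, age, country)"

# After ranking, the columns judged most relevant to the question
# are moved to the front of the database context.
ranked_schema = "singer(name, singer_id, age, country), concert(concert_id, venue, year)"
```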

### Dataset Summary

Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.

This dataset was created to validate Spider-fine-tuned LLMs with database context.
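
A minimal loading sketch using the `datasets` library is shown below. The repository id is a placeholder, and the available split names depend on how the dataset is published on the Hub:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub path of this dataset.
dataset = load_dataset("your-namespace/spider-context-validation-schema-ranked")

# Each record has three string fields (see dataset_info above):
#   db_id        -- the Spider database the question targets
#   prompt       -- the natural-language question plus the ranked database context
#   ground_truth -- the reference SQL query
for split_name, split in dataset.items():
    example = split[0]
    print(split_name, example["db_id"])
    print(example["prompt"][:200])
    print(example["ground_truth"])
```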

### Yale LILY Spider Leaderboard

The leaderboard can be seen at https://yale-lily.github.io/spider

### Languages

The text in the dataset is in English.

### Licensing Information

The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.

### Citation
```
@article{yu2018spider,
  title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
  author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
  journal={arXiv preprint arXiv:1809.08887},
  year={2018}
}
```