detakarang committed on
Commit 8d9e063
1 Parent(s): bea3efa

update first readme

Files changed (1)
  1. README.md +49 -0
README.md CHANGED
---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
- table-question-answering
language:
- id
tags:
- SQL
- code
- NLP
- text-to-sql
- context-sql
- spider
- wikisql
- sqlglot
pretty_name: sql-create-context-id
size_categories:
- 10K<n<100K
---
#### Overview
This dataset is a fork of [sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context). <br>
It is built from [WikiSQL](https://huggingface.co/datasets/wikisql) and [Spider](https://huggingface.co/datasets/spider).

There are 78,577 examples of natural language questions, SQL CREATE TABLE statements, and SQL queries that answer the question using the CREATE TABLE statement as context. This dataset was built with text-to-SQL LLMs in mind, aiming to prevent the hallucination of column and table names often seen in models trained on text-to-SQL datasets. The CREATE TABLE statement can often be copied and pasted from different DBMSs and provides the table names, column names, and their data types. By providing just the CREATE TABLE statement as context, we can hopefully give models better grounding without having to provide actual rows of data, limiting token usage and exposure to private, sensitive, or proprietary data.
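
Below is a minimal usage sketch, assuming the `question`, `context`, and `answer` fields shown in the random sample further down; the repository id is an assumption based on the dataset's owner and `pretty_name`, so adjust it if the actual Hub id differs.

```python
# Minimal sketch: load the dataset and build a schema-grounded text-to-SQL prompt.
from datasets import load_dataset

# NOTE: assumed repository id; replace with the actual Hub id if it differs.
ds = load_dataset("detakarang/sql-create-context-id", split="train")

example = ds[0]
prompt = (
    "Given the following schema, write a SQL query that answers the question.\n\n"
    f"Schema:\n{example['context']}\n\n"
    f"Question: {example['question']}\n"
    "SQL:"
)
print(prompt)
print("Expected answer:", example["answer"])
```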

#### Cleansing and Augmentation
Cleansing and data augmentation have been performed on the combined WikiSQL and Spider data. I used [SQLGlot](https://github.com/tobymao/sqlglot) to parse the queries from Spider and WikiSQL into their tables and columns, and then inferred column data types from the use of the `>` and `<` operators as well as `MIN()`, `MAX()`, `AVG()`, and `SUM()` on columns. While this isn't perfect, it increases the likelihood of inferring the correct data type for a column; columns otherwise default to VARCHAR. These tables and columns are then used to generate CREATE TABLE statements with the inferred types. SQLGlot is used again to ensure that both the SQL queries and the CREATE TABLE statements parse without errors.
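
The snippet below is a minimal sketch of that idea, not the exact pipeline used to build this dataset: it assumes a single-table query, collects the referenced columns with SQLGlot, marks columns used with `>`/`<` or numeric aggregates as INTEGER, defaults the rest to VARCHAR, and re-parses the generated CREATE TABLE statement to confirm it is valid SQL.

```python
# Minimal sketch (not the author's exact pipeline): infer column types from a
# query with SQLGlot and emit a CREATE TABLE statement to use as context.
import sqlglot
from sqlglot import exp

def build_create_table(query: str, table_name: str) -> str:
    tree = sqlglot.parse_one(query)

    # Default every referenced column to VARCHAR.
    columns = {col.name: "VARCHAR" for col in tree.find_all(exp.Column) if col.name}

    # Columns compared with > / < or aggregated numerically are likely numeric.
    for node in tree.find_all(exp.GT, exp.LT, exp.Min, exp.Max, exp.Avg, exp.Sum):
        for col in node.find_all(exp.Column):
            if col.name:
                columns[col.name] = "INTEGER"

    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns.items())
    create = f"CREATE TABLE {table_name} ({cols})"

    # Re-parse to make sure the generated statement itself is valid SQL.
    sqlglot.parse_one(create)
    return create

# Prints a CREATE TABLE statement with Population inferred as INTEGER
# and Theme left as VARCHAR.
print(build_create_table(
    "SELECT Theme FROM farm_competition WHERE Population > 1000",
    "farm_competition",
))
```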

Some queries do not reference any column names, e.g. `SELECT * FROM table`; these have a default `Id` column added to the CREATE TABLE statement. Other queries that use the generic `table` as the FROM table have instead been changed to a variation such as `table_name_1` (or another number), which is also reflected in the CREATE TABLE statement.

#### TODO
- Further augment the data by converting the queries and CREATE TABLE statements into different SQL dialects; this can be done with SQLGlot (a rough sketch follows this list). A reference to the dialect might also be added to the question.
- Support other informative contexts beyond CREATE TABLE.
- Better parse data types to clean up issues such as numbers used as column names and other numbers stored as strings.
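
As a rough illustration of the first item, SQLGlot's `transpile` can rewrite a statement from one dialect into another. The dialects below are only examples, not a commitment to which dialects a future version would include.

```python
# Minimal sketch: converting a query between SQL dialects with SQLGlot.
import sqlglot

query = "SELECT Status, AVG(Population) FROM city GROUP BY Status"

# transpile() returns one string per input statement.
converted = sqlglot.transpile(query, read="sqlite", write="postgres")[0]
print(converted)
```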

If you have any edits you'd like to see in a version 2 of this dataset, let me know.

Random sample:
```json
[
  {
    "question": "Please show the themes of competitions with host cities having populations larger than 1000.",
    "context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)",
    "answer": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000"
  },
  {
    "question": "Please show the different statuses of cities and the average population of cities with each status.",
    "context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)",
    "answer": "SELECT Status, AVG(Population) FROM city GROUP BY Status"
  }
]
```