What it is:
Each dataset in this delivery consists of query clusters that test an aspect of the consistency of an LLM's knowledge about a particular domain. All the questions in a cluster are meant to be answered either 'yes' or 'no'. When the answers vary within a cluster, the knowledge is said to be inconsistent. When all the questions in a cluster are answered 'no' where the expected answer is 'yes' (or vice versa), the knowledge is said to be incomplete (i.e., the LLM may not have been trained on that particular domain). In our experience, incomplete clusters are rare (less than 3%), meaning that the LLMs we have tested do know about the domains included here (see below for the list of individual datasets), whereas inconsistent clusters can account for 6%-20% of the total clusters.
The image below indicates the types of edges the query clusters are meant to test. It is worth noting that these correspond to common-sense axioms about conceptualization, like the fact that subConceptOf is transitive (4) or that subconcepts inherit the properties of their parent concepts (5). These axioms are listed in the accompanying paper (see below).
How it is made:
The questions and clusters are automatically generated from a knowledge graph, starting from seed concepts and properties. In our case, we have used Wikidata, a well-known knowledge graph. The result is an RDF/OWL subgraph that can be queried and reasoned over with Semantic Web technology. The figure below summarizes the steps involved. The last two steps describe a possible use case for this dataset: using in-context learning to improve an LLM's performance on the dataset.
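As a rough illustration of the "reasoned over" step (this is our own minimal sketch, not the authors' pipeline, and the concept names are just the examples from this card), the deductive closure of the subConceptOf edges can be computed by iterating transitivity to a fixed point:

```python
# Minimal sketch: derive the deductive closure of subConceptOf edges
# from a toy seed subgraph. The n-hop "virtual" edges are the pairs
# implied by transitivity but not asserted directly.

def transitive_closure(edges):
    """Return all (child, ancestor) pairs implied by transitivity."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

seed_edges = {
    ("orthopedic pediatric surgeon", "orthopedic surgeon"),
    ("orthopedic surgeon", "surgeon"),
    ("surgeon", "medical specialist"),
    ("medical specialist", "medical occupation"),
}

closure = transitive_closure(seed_edges)
virtual = closure - seed_edges  # n-hop edges used by hierarchy clusters
```

In a real pipeline these edges would come from the extracted RDF/OWL subgraph rather than a hand-written set.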
Types of query clusters
There are different types of query clusters depending on what aspect of the knowledge graph and its deductive closure they capture:
Edge clusters test a single edge using different questions. For example, to test the edge ('orthopedic pediatric surgeon', IsA, 'orthopedic surgeon'), the positive or 'edge_yes' (expected answer is 'yes') cluster is:
"is 'orthopedic pediatric surgeon' a subconcept of 'orthopedic surgeon' ?",
"is 'orthopedic pediatric surgeon' a type of 'orthopedic surgeon' ?",
"is every kind of 'orthopedic pediatric surgeon' also a kind of 'orthopedic surgeon' ?",
"is 'orthopedic pediatric surgeon' a subcategory of 'orthopedic surgeon' ?"
There are also inverse edge clusters (with questions like "is 'orthopedic surgeon' a subconcept of 'orthopedic pediatric surgeon' ?") and negative or 'edge_no' clusters (with questions like "is 'orthopedic pediatric surgeon' a subconcept of 'dermatologist' ?")
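The paraphrases above can be produced from a single edge with a handful of templates. The sketch below is illustrative (the template strings mirror the question forms shown above; the function names are ours, not from the dataset's tooling):

```python
# Hedged sketch: generate an edge cluster's paraphrases from one
# (child, IsA, parent) edge; inverse clusters simply swap the arguments.

TEMPLATES = [
    "is '{c}' a subconcept of '{p}' ?",
    "is '{c}' a type of '{p}' ?",
    "is every kind of '{c}' also a kind of '{p}' ?",
    "is '{c}' a subcategory of '{p}' ?",
]

def edge_cluster(child, parent, inverse=False):
    """Build an edge_yes cluster, or the inverse-edge cluster if requested."""
    if inverse:
        child, parent = parent, child
    return [t.format(c=child, p=parent) for t in TEMPLATES]

cluster = edge_cluster("orthopedic pediatric surgeon", "orthopedic surgeon")
inv_cluster = edge_cluster("orthopedic pediatric surgeon",
                           "orthopedic surgeon", inverse=True)
```

Negative ('edge_no') clusters would be built the same way, pairing the child with an unrelated concept.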
Hierarchy clusters measure the consistency of a given path, including n-hop virtual edges (in the graph's deductive closure). For example, the path ('orthopedic surgeon', 'surgeon', 'medical specialist', 'medical occupation') is tested by the cluster below:
"is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
"is 'orthopedic surgeon' a type of 'surgeon' ?",
"is every kind of 'orthopedic surgeon' also a kind of 'surgeon' ?",
"is 'orthopedic surgeon' a subcategory of 'surgeon' ?",
"is 'orthopedic surgeon' a subconcept of 'medical specialist' ?",
"is 'orthopedic surgeon' a type of 'medical specialist' ?",
"is every kind of 'orthopedic surgeon' also a kind of 'medical specialist' ?",
"is 'orthopedic surgeon' a subcategory of 'medical specialist' ?",
"is 'orthopedic surgeon' a subconcept of 'medical_occupation' ?",
"is 'orthopedic surgeon' a type of 'medical_occupation' ?",
"is every kind of 'orthopedic surgeon' also a kind of 'medical_occupation' ?",
"is 'orthopedic surgeon' a subcategory of 'medical_occupation' ?"
Property inheritance clusters test the most basic property of conceptualization: if an orthopedic surgeon is a type of surgeon, we expect all the properties of surgeons, e.g., having to be board certified, having attended medical school, or working in the field of surgery, to be inherited by orthopedic surgeons. The example below tests the latter:
"is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
"is 'orthopedic surgeon' a type of 'surgeon' ?",
"is every kind of 'orthopedic surgeon' also a kind of 'surgeon' ?",
"is 'orthopedic surgeon' a subcategory of 'surgeon' ?",
"is the following statement true? 'orthopedic surgeon works on the field of surgery' ",
"is the following statement true? 'surgeon works on the field of surgery' ",
"is it accurate to say that 'orthopedic surgeon works on the field of surgery'? ",
"is it accurate to say that 'surgeon works on the field of surgery'? "
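Scoring a cluster from a model's answers follows directly from the definitions given above (all answers matching the expected label means consistent, all wrong suggests incomplete knowledge, mixed means inconsistent). A minimal sketch of that classification, not the authors' evaluation code:

```python
# Illustrative scoring sketch: classify a cluster from a model's
# yes/no answers against the expected answer for the whole cluster.

def classify_cluster(answers, expected="yes"):
    if all(a == expected for a in answers):
        return "consistent"
    if all(a != expected for a in answers):
        return "incomplete"
    return "inconsistent"

print(classify_cluster(["yes", "yes", "yes", "yes"]))  # consistent
print(classify_cluster(["no", "no", "no", "no"]))      # incomplete
print(classify_cluster(["yes", "no", "yes", "yes"]))   # inconsistent
```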
List of datasets
To show the versatility of our approach, we have constructed similar datasets in the domains below. We test one property inheritance per dataset. The main Wikidata QNode (the node corresponding to the entities) and PNode (the node corresponding to the property) are indicated in parentheses.
The size and configuration of each dataset are listed below:
domain | edges_yes | edges_no | edges_in | hierarchies | property hierarchies |
---|---|---|---|---|---|
Academic Disciplines | 52 | 308 | 52 | 30 | 1 |
Dishes | 225 | 521 | 224 | 72 | 178 |
Financial product | 112 | 433 | 108 | 40 | 32 |
Home appliances | 58 | 261 | 58 | 31 | 13 |
Medical specialties | 122 | 386 | 114 | 55 | 63 |
Music genres | 490 | 807 | 488 | 212 | 139 |
Natural disasters | 45 | 225 | 44 | 21 | 22 |
Software | 80 | 572 | 79 | 114 | 4 |
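The cluster files appear to be JSON objects mapping a cluster id to its list of questions, which is why columnar loaders can struggle with them; plain `json` handles the shape directly. The excerpt below is a hypothetical two-question fragment in that shape, not actual dataset content:

```python
# Hedged sketch: iterate over clusters in the (assumed) id -> questions
# JSON layout. In practice you would json.load() the downloaded file.
import json

raw = """{
  "orthopedic_surgeon_edge_yes_1": [
    "is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
    "is 'orthopedic surgeon' a type of 'surgeon' ?"
  ]
}"""

clusters = json.loads(raw)
for cluster_id, questions in clusters.items():
    print(cluster_id, len(questions))
```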
Want to know more?
For background and motivation on this dataset, please see https://arxiv.org/abs/2405.20163, also published at COLM 2024:
@inproceedings{Uceda_2024_1,
title={Reasoning about concepts with LLMs: Inconsistencies abound},
author={Rosario Uceda Sosa and Karthikeyan Natesan Ramamurthy and Maria Chang and Moninder Singh},
booktitle={Proc.\ 1st Conference on Language Modeling (COLM 24)},
year={2024}
}
Questions? Comments?
Please contact rosariou@us.ibm.com, knatesa@us.ibm.com, Maria.Chang@ibm.com or moninder@us.ibm.com