---
license: cc-by-4.0
language:
- he
---

# Coreference Project

by DDRND (Mafat) as part of the Israeli national NLP program (see our GitHub at https://nnlp-il.mafat.ai/#Our-Github) and the Israeli Association of Human Language Technologies (https://www.iahlt.org)
## Introduction
The coreference corpus is an extension of IAHLT's named entities dataset for Hebrew and Arabic. The project is a work in progress: a subset of articles (at the full-document level) that have already been annotated for entities is being further annotated for (named) entity coreference.
The corpus consists of:
- 1 apc article from YouTube transcripts (0%);
- 201 arb articles from the Kul al-Arab news organisation (96%), the All Rights entitlements organisation (0%), and Weizmann popular science articles (2%);
- 657 heb articles from Bagatz court decisions (3%), Davar news organisation (75%), Israel Hayom news organisation (3%), Knesset protocols (1%), Weizmann popular science articles (4%), and Hebrew Wikipedia entries (11%).
The corpus (1 paragraph of apc, 2811 paragraphs of arb, and 9610 paragraphs of heb) has been annotated with morpheme-level mention spans, assembled into coreference clusters with entity types.
## Data set
The current release includes the following files:
Annotated documents (.jsonl):
- data/coref-4-rc7-heb-all -- heb articles
- data/coref-4-rc7-heb-unique -- heb articles, each annotated once
- data/coref-4-rc7-heb-iaa -- heb articles, used for IAA
Additionally, all files are provided in a human-readable form (readable_data/*).
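Each line of these files is a standalone JSON record, so they can be read with the Python standard library. A minimal loading sketch; the exact filename `data/coref-4-rc7-heb-all.jsonl` is an assumption based on the file list and extension above:

```python
import json

# Load one of the annotated files; the path (with .jsonl extension)
# is assumed from the file list above.
path = "data/coref-4-rc7-heb-all.jsonl"

records = []
with open(path, encoding="utf-8") as f:
    for line in f:  # one JSON record (article) per line
        records.append(json.loads(line))

print(len(records), "documents")
print(records[0]["metadata"])  # document-level metadata: source, doc_id, ...
```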
## Format
Each article is a single JSON record. Some articles have been doubly annotated for an inter-annotator agreement study; these articles appear multiple times.
The jsonl structure is:

```
{ text: str,
  user: str,
  metadata: { source: str, doc_id: str, ... },
  clusters: [ {
      metadata: { name: str, entity: str },
      mentions: [ (int, int, dict) ]
  } ]
}
```
The `text` field contains the raw text of the original article. The top-level `metadata` dictionary provides document-level metadata, minimally `source` and `doc_id`.

The `clusters` field is a list of JSON cluster records, each containing a `metadata` and a `mentions` field. The cluster-level `metadata` field has a `name` for the cluster and its `entity` type. The `mentions` field is a list of triples: the span indices of the text plus a metadata dictionary. We provide no mention-level metadata in this release.
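Continuing the loading sketch above, the following shows how a mention's surface form might be recovered from its span. Treating the two integers as a (start, end) character span into `text` with an exclusive end is an assumption; verify against the `readable_data/*` files:

```python
# Print each cluster's mentions as surface strings.
# Assumption: the (int, int) pair is a (start, end) character span into
# `text` with an exclusive end; check readable_data/* to confirm.
def print_clusters(record):
    text = record["text"]
    for cluster in record["clusters"]:
        meta = cluster["metadata"]
        print(f"cluster {meta.get('name')!r} (entity: {meta.get('entity')}):")
        for start, end, _mention_meta in cluster["mentions"]:
            print("  ", text[start:end])

print_clusters(records[0])
```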
Not all clusters have been annotated for entity type; this will be completed in a future release.
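Until then, downstream code should tolerate clusters without a type. A small sketch, again continuing from above; treating a missing or empty `entity` value as "untyped" is an assumption:

```python
# Partition clusters by whether an entity type has been assigned.
# Assumption: untyped clusters have a missing or empty `entity` value.
typed = []
untyped = []
for record in records:
    for cluster in record["clusters"]:
        (typed if cluster["metadata"].get("entity") else untyped).append(cluster)

print(len(typed), "typed clusters;", len(untyped), "untyped clusters")
```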
## Acknowledgments
We would like to thank all the people who contributed to this corpus:
- Amir Cohen
- Amjad Aliat
- Emmanuel Kowner
- Israel Landau
- Mutaz Ayesh
- Nick Howell
- Noam Ordan
- Omer Strass
- Shahar Adar
- Shira Wigderson
- Yifat Ben Moshe
- amirejmail
- hiba_ammash