ELCC

The Emergent Language Corpus Collection (ELCC) is a collection of corpora and metadata from a variety of emergent communication simulations.

Using ELCC

You can clone this repository with Git LFS and use the data directly, or load the data via the mlcroissant library. To install mlcroissant and its dependencies, see the conda environment at util/environment.yml. Below is an example of loading ELCC's data via mlcroissant.

import mlcroissant as mlc

cr_url = "https://huggingface.co/datasets/bboldt/elcc/raw/main/croissant.json"
dataset = mlc.Dataset(jsonld=cr_url)

# A raw corpus of integer arrays; the corpora are named based on their paths;
# e.g., "systems/babyai-sr/data/GoToObj/corpus.json" becomes
# "babyai-sr/GoToObj".
records = dataset.records(record_set="babyai-sr/GoToObj")
# System-level metadata
records = dataset.records(record_set="system-metadata")
# Raw JSON string for system metadata; some fields aren't handled well by
# Croissant, so you can access them here if need be.
records = dataset.records(record_set="system-metadata-raw")
# Corpus metadata, specifically metrics generated by ELCC's analyses
records = dataset.records(record_set="corpus-metadata")
# Raw corpus metadata
records = dataset.records(record_set="corpus-metadata-raw")

# `records` can now be iterated through to access the individual elements.
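For example, messages can be gathered into plain Python lists by looping over a record set. A minimal sketch; the "message" field name and the sample data below are assumptions for illustration only, so print one record to discover the real column names:

```python
# Minimal sketch: collect integer-token messages from a record set into
# plain Python lists. The "message" field name is an assumption; print
# one record to see the actual column names.
def messages_from_records(records):
    return [list(r["message"]) for r in records]

# Illustrative stand-in for `dataset.records(record_set="babyai-sr/GoToObj")`:
sample = [{"message": [3, 1, 4]}, {"message": [1, 5]}]
print(messages_from_records(sample))  # [[3, 1, 4], [1, 5]]
```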

Developing

Running individual EC systems

For each emergent language entry, we provide wrapper code (in systems/*/code/) to create a reproducible environment and run the emergent language-generating code. Environments are specified precisely in the environment.yml file; if you wish to edit the dependencies manually, it may be easier to start with environment.editable.yml instead, if it exists. Next, either run or read run.sh or run.py to see the commands necessary to produce the corpora.
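Concretely, reproducing one entry's corpora might look like the following sketch; the entry name and environment name are placeholders, and the authoritative commands are whatever each system's run.sh or run.py contains:

```shell
cd systems/some-system/code/         # placeholder entry name
conda env create -f environment.yml  # build the pinned environment
conda activate some-env              # name is defined inside environment.yml
bash run.sh                          # or: python run.py
```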

Git submodules

This project uses git submodules to manage external dependencies. Submodules do not always behave intuitively, so we give a brief explanation of how to use them here. By default, submodules are not "init-ed", which means they will be empty after you clone the project. If you would like to populate a submodule (i.e., the directory pointing to another repo) to see or use its code, first run git submodule init path/to/submodule to mark it as init-ed; then run git submodule update to populate all init-ed submodules. Run git submodule deinit -f path/to/submodule to make the submodule empty again.
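The init/update/deinit cycle can be exercised end to end in a throwaway directory. A sketch under assumptions: the repo and path names are placeholders, and the protocol.file.allow setting is only needed here because the demo submodule lives at a local file path, which recent Git versions block by default:

```shell
set -eu
tmp=$(mktemp -d); cd "$tmp"
# Build a tiny "library" repo and a superproject that submodules it.
git init -q lib
git -C lib -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git init -q app
git -C app -c protocol.file.allow=always submodule add "$tmp/lib" vendor/lib
git -C app -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add submodule"
# A fresh clone starts with the submodule un-init-ed (an empty directory):
git clone -q "$tmp/app" app-clone; cd app-clone
git submodule init vendor/lib                        # mark as init-ed
git -c protocol.file.allow=always submodule update   # populate it
test -e vendor/lib/.git && echo "populated"
git submodule deinit -f vendor/lib                   # empty it again
test ! -e vendor/lib/.git && echo "emptied"
```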
