

Wikidata 2018-12-17 JSON Dump

This repository hosts a snapshot of the Wikidata JSON dump from 2018-12-17. The dataset was originally obtained from Zenodo (Record #4436356).

License

Wikidata’s data is published under the Creative Commons CC0 1.0 Universal Public Domain Dedication (CC0). You can use this dataset freely for any purpose without copyright restriction. However, attribution to Wikidata is strongly encouraged as a best practice.

Important: Some associated media, such as images referenced within Wikidata items, may be under different licenses. The JSON data itself is CC0.

How to Cite

If you use this dataset in your work, please cite:

  • Wikidata:
    Wikidata contributors. (2018). Wikidata (CC0 1.0 Universal). 
    Retrieved from https://www.wikidata.org/
    
  • Original Zenodo Record (optional):
    Wikidata JSON dumps. Zenodo. 
    https://zenodo.org/record/4436356
    

How to Use

This dump is ready to use. It’s stored as a gzipped JSON array where each array element is a single Wikidata entity.
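
For orientation, here is an abridged sketch of what one array element looks like. The values below are illustrative and most fields are elided; real records carry many more languages, statements, and sitelinks.

# Abridged, hypothetical sketch of a single Wikidata entity from the array.
# Only the common top-level keys are shown; real records are much larger.
example_entity = {
    "type": "item",
    "id": "Q42",
    "labels": {"en": {"language": "en", "value": "Douglas Adams"}},
    "descriptions": {"en": {"language": "en", "value": "English writer and humorist"}},
    "aliases": {"en": [{"language": "en", "value": "Douglas Noel Adams"}]},
    "claims": {},     # statements keyed by property ID, e.g. "P31" (instance of)
    "sitelinks": {},  # links to Wikipedia and other Wikimedia projects
    "lastrevid": 0,
}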

Example: Python Code to Stream the JSON

Below is a sample script showing how to read the dump without writing a decompressed copy to disk. It uses the ijson library (installable with pip install ijson) for iterative JSON parsing, so the full array never has to be held in memory at once.

import gzip
import ijson

def stream_wikidata_array(gz_file_path):
    """
    Streams each element from a top-level array in the gzipped JSON.
    Yields Python dicts (or lists), one for each array element.
    """
    with gzip.open(gz_file_path, 'rb') as f:
        # 'item' means "each element of the array"
        for element in ijson.items(f, 'item'):
            yield element

if __name__ == "__main__":
    # Replace with the path to your Wikidata dump
    wikidata_path = r"E:\wikidata\20181217.json.gz"
    
    # Just print the first few records
    max_to_print = 5
    for i, record in enumerate(stream_wikidata_array(wikidata_path), start=1):
        print(f"Record #{i}:")
        print(record)
        
        if i >= max_to_print:
            print("...stopping here.")
            break

You can adapt this approach to load the data into your own workflow, whether that’s local analysis, a database import, or a big data pipeline.
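
As one concrete adaptation, the sketch below streams the dump once and stores each entity's ID together with its English label in a local SQLite database. The database file name, table name, and batch size are arbitrary choices made for this example, not anything prescribed by the dataset.

import gzip
import sqlite3

import ijson

def load_english_labels(gz_file_path, db_path="wikidata_labels.db"):
    """
    Streams the gzipped JSON array and writes (entity ID, English label)
    pairs into a SQLite table, committing in batches to keep memory flat.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS labels (id TEXT PRIMARY KEY, label_en TEXT)")

    batch = []
    with gzip.open(gz_file_path, 'rb') as f:
        for entity in ijson.items(f, 'item'):
            label = entity.get("labels", {}).get("en", {}).get("value")
            if label:
                batch.append((entity["id"], label))
            if len(batch) >= 10000:
                conn.executemany("INSERT OR REPLACE INTO labels VALUES (?, ?)", batch)
                conn.commit()
                batch.clear()

    # Flush any remaining rows from the last partial batch
    if batch:
        conn.executemany("INSERT OR REPLACE INTO labels VALUES (?, ?)", batch)
        conn.commit()
    conn.close()

The same pattern generalizes to other per-entity extractions: filter or transform inside the loop, and write out in batches so that memory use stays flat regardless of dump size.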

Disclaimer

  • This snapshot is dated December 2018 and does not reflect later changes to the live Wikidata database.
  • This repository and uploader are not affiliated with the Wikimedia Foundation or the official Wikidata project beyond using their data.
  • Please ensure you comply with any relevant data protection or privacy regulations when using this dataset in production.

Thank you for your interest in Wikidata and open knowledge!

