Dataset Card for citizen_nlu

Dataset Summary

NeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks.

This challenge was created to spark AI applications that address some of the pressing problems in India and to find unique ways of solving them. Starting with a focus on NLU, the challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web.

NeuralSpace aims to master the low-resource domain, and citizen services are a naturally multilingual use case that is essential for the general public.

Citizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries.

Any particular citizen may not need such services regularly, but when they are needed they are of utmost importance, and across the population the demand for them arises every day.

Despite the importance of citizen services, linguistically rich countries like India still lag far behind in delivering these essential services to their citizens with ease. The best services currently available are not offered in the many low-resource languages native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly.

As our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a citizen-services bot with the ability to converse in vernacular languages would make it accessible to the vast group of people for whom English is not the language of choice, but who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants.

Supported Tasks

A key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition’. This is what enables a chatbot to carry out its various tasks with ease. A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants.
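As a concrete illustration of the intent-classification half of this pipeline, the sketch below trains a simple TF-IDF + logistic-regression baseline on the CSV data. It is only a minimal baseline, not the official NeuralSpace pipeline, and the file name train.csv and the text/intent column layout are assumptions about a local copy of the data.

  # Minimal intent-classification baseline (a sketch, not the official pipeline).
  # Assumes a local train.csv with "text" and "intent" columns as described in
  # the Data Fields section below.
  import pandas as pd
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  df = pd.read_csv("train.csv")  # hypothetical local path to the training split

  # Character n-grams work across scripts (Devanagari, Tamil, Telugu, ...)
  # without any language-specific tokenization.
  clf = make_pipeline(
      TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
      LogisticRegression(max_iter=1000),
  )
  clf.fit(df["text"], df["intent"])

  # Predict the intent of a new Hindi query.
  print(clf.predict(["मैं एफआईआर दर्ज कराना चाहता हूं"]))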

citizen_nlu

A manually curated multilingual dataset for citizen services, built by data engineers at NeuralSpace. It covers a realistic information-seeking task in 9 Indian languages, with data samples written by native-speaking expert data annotators. The dataset files are available in CSV format.
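Because the files ship as plain CSV, one straightforward way to load them is with the datasets library's generic CSV loader. The file names train.csv and test.csv below are assumptions about how a local copy is organized, not an official layout.

  from datasets import load_dataset

  # Load local CSV copies of the splits; adjust the paths to your download.
  data_files = {"train": "train.csv", "test": "test.csv"}  # assumed file names
  ds = load_dataset("csv", data_files=data_files)

  print(ds)              # DatasetDict with "train" and "test" splits
  print(ds["train"][0])  # one example, e.g. {"text": ..., "intent": ...}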

Languages

The citizen_nlu data is available in nine Indian languages: Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu.

Dataset Structure

Data Instances

  • Size of downloaded dataset files: 67.6 MB

An example of 'test' looks as follows.

मेरे पिता की कार उनके कार्यालय की पार्किंग से  कल  से गायब है। वाहन संख्या  केए-03-एचए-1985 । मैं एफआईआर कराना चाहता हूं।,ReportingMissingVehicle

(In English, roughly: "My father's car has been missing from his office parking since yesterday. Vehicle number KA-03-HA-1985. I want to file an FIR." Intent: ReportingMissingVehicle)

An example of 'train' looks as follows.

என் தாத்தா எனக்கு பிறந்தநாள் பரிசு கொடுத்தார் மஞ்சள் நான் டாடனானோவை இழந்தேன். காணவில்லை என புகார் தெரிவிக்க விரும்புகிறேன்,ReportingMissingVehicle

(In English, roughly: "My grandfather gave me a yellow Tata Nano as a birthday present, and I have lost it. I want to report it missing." Intent: ReportingMissingVehicle)

Data Fields

The data fields are the same among all splits.

citizen_nlu

  • text: a string feature.
  • intent: a string feature.
  • type: the split a sample belongs to, with possible values train and test (see the sketch after this list).
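If the data arrives as a single combined CSV that carries the type column above (an assumption; the release may equally ship one file per split), the train and test partitions can be recovered by filtering on that column:

  import pandas as pd

  # Hypothetical combined file with text, intent and type columns.
  df = pd.read_csv("citizen_nlu.csv")

  train_df = df[df["type"] == "train"][["text", "intent"]]
  test_df = df[df["type"] == "test"][["text", "intent"]]

  # The counts should match the split sizes listed in the next section.
  print(len(train_df), len(test_df))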

Data Splits

citizen_nlu

             train   test
citizen_nlu  287832  4752

Contributions

Mehar Bhatia (mehar@neuralspace.ai)
