dataset_details:

description: A database and analysis tool for cyber exploits, vulnerabilities, and related information, providing security researchers with a structured dataset for analyzing and mitigating security risks.

task_categories:

  • data_analysis

structure: (a loading sketch for the CSV files follows this listing)

  • data/
    • exploits.csv
    • vulnerabilities.csv
  • assets/
    • favicon.svg
  • .streamlit/
    • config.toml
  • main.py
  • data_processor.py
  • visualizations.py
  • README.md
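
A minimal sketch of loading the two CSVs under data/ with pandas. The paths mirror the listing above; the latin-1 fallback is an assumption in case the files are not UTF-8 encoded, and nothing here relies on specific column names.

```python
# Sketch: load the two CSVs from the data/ directory with pandas.
# The encoding fallback is an assumption; the files may not be UTF-8 encoded.
from pathlib import Path

import pandas as pd


def load_csv(path: Path) -> pd.DataFrame:
    """Read a CSV, falling back to latin-1 if it is not valid UTF-8."""
    try:
        return pd.read_csv(path, encoding="utf-8")
    except UnicodeDecodeError:
        return pd.read_csv(path, encoding="latin-1")


data_dir = Path("data")
exploits = load_csv(data_dir / "exploits.csv")
vulnerabilities = load_csv(data_dir / "vulnerabilities.csv")

# Inspect the columns and a few rows before any analysis.
print(exploits.dtypes)
print(exploits.head())
print(vulnerabilities.dtypes)
print(vulnerabilities.head())
```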

intended_use: Designed for security researchers, developers, and educators to analyze and understand cybersecurity exploits and vulnerabilities.

use_cases:

  • Trend analysis for cybersecurity research
  • Educational tool for cybersecurity training
  • Security protocol development and testing

how_to_use:

  • Clone the repository
  • Install the dependencies listed in the requirements file: pip install -r requirements.txt
  • Run the Streamlit application (a minimal sketch follows this list)
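
A minimal Streamlit sketch, under stated assumptions: it is not the repository's actual main.py, the file paths mirror the structure listing above, and the encoding fallback and free-text filter are illustrative only.

```python
# app_sketch.py -- a minimal, hypothetical Streamlit front end (not the repo's main.py).
# Run with: streamlit run app_sketch.py
import pandas as pd
import streamlit as st


@st.cache_data
def load_data(path: str) -> pd.DataFrame:
    # Assumption: the CSVs may not be UTF-8, so fall back to latin-1.
    try:
        return pd.read_csv(path, encoding="utf-8")
    except UnicodeDecodeError:
        return pd.read_csv(path, encoding="latin-1")


st.title("CyberExploitDB explorer (sketch)")

dataset = st.selectbox("Dataset", ["data/exploits.csv", "data/vulnerabilities.csv"])
df = load_data(dataset)

# Narrow the table with a case-insensitive substring match over all columns.
query = st.text_input("Filter rows (substring match)")
if query:
    mask = df.astype(str).apply(
        lambda col: col.str.contains(query, case=False, na=False)
    ).any(axis=1)
    df = df[mask]

st.dataframe(df)
```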

key_features:

  • Comprehensive coverage of known exploits and vulnerabilities
  • Interactive Streamlit-based interface for data visualization
  • Community-driven updates and contributions

performance_metrics:

  • Detection rate: 90% for known exploits
  • False positive rate: 2% (a sketch of how these rates are typically computed follows this list)
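
The card does not state how these rates were measured; below is a minimal sketch, assuming labeled detector output, of how a detection rate (recall over known exploits) and a false positive rate are conventionally computed. The sample labels are made up for illustration.

```python
# Sketch: compute detection rate and false positive rate from labeled
# detector output. The example labels below are illustrative only.
from typing import Sequence


def detection_and_fp_rate(y_true: Sequence[int], y_pred: Sequence[int]) -> tuple[float, float]:
    """y_true/y_pred use 1 for 'exploit present', 0 for 'benign'."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return detection_rate, false_positive_rate


# Illustrative example with made-up labels.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
dr, fpr = detection_and_fp_rate(truth, predicted)
print(f"Detection rate: {dr:.0%}, false positive rate: {fpr:.0%}")
```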

ethical_considerations:

  • Ensure responsible usage of the data for security enhancement only
  • Follow legal and ethical frameworks when handling sensitive data

limitations_and_biases:

  • Limited dataset size
  • Potential bias towards known, well-documented exploits

citation: Canstralian. CyberExploitDB. Hugging Face. https://huggingface.co/datasets/Canstralian/CyberExploitDB

contributing: Contributions are encouraged. Refer to the CONTRIBUTING.md file for guidelines.

authors: Canstralian

acknowledgments: Thanks to contributors and supporters for their assistance.
