# Keypoints-RWF-total Dataset

This dataset is a **fusion** of three distinct datasets:

1. **RWF-2000**: A dataset of real-world videos of fight and non-fight scenarios.
2. **Hockey Violence Dataset**: A dataset focused on violent interactions in hockey games.
3. **Airtlab Violence Dataset**: A dataset containing videos of violent incidents in various settings.

**Important Note**: We do not own these datasets. By using this dataset, you agree to follow the rules and licensing agreements of the original datasets. Please review the terms of use for the individual datasets: [RWF-2000 Terms](#), [Hockey Violence Dataset Terms](#), and [Airtlab Violence Dataset Terms](#).

## Dataset Structure

The dataset is organized as follows:

```
../keypoints-rwf-2000/
├── train/
│   ├── Fight/
│   │   ├── _2RYnSFPD_U_0/
│   │   │   ├── _2RYnSFPD_U_0.avi            # Original video
│   │   │   ├── _2RYnSFPD_U_0_processed.avi  # Processed video
│   │   │   └── _2RYnSFPD_U_0.json           # Keypoint or metadata JSON
│   │   └── [more video folders...]
│   └── NonFight/
│       └── [more video folders...]
├── val/
│   ├── Fight/
│   │   ├── _2RYnSFPD_U_0/
│   │   │   ├── _2RYnSFPD_U_0.avi            # Original video
│   │   │   ├── _2RYnSFPD_U_0_processed.avi  # Processed video
│   │   │   └── _2RYnSFPD_U_0.json           # Keypoint or metadata JSON
│   │   └── [more video folders...]
│   └── NonFight/
│       └── [more video folders...]
└── [other files...]
```

### Files Explanation:

- **`.avi` files**: The original video (`.avi`) and the processed video (`_processed.avi`). The processed video is the original video with the keypoints and detections drawn on top for visualization.
- **`.json` files**: Each video has a corresponding JSON file containing its keypoint data.

### JSON Structure

Each JSON file is a list of frames. Each frame contains the detections for that frame, and each detected person has a bounding box, a detection confidence, and a set of keypoints with coordinates and per-keypoint confidence scores. Here's an example of the JSON structure:

```json
[
  {
    "frame": 0,
    "detections": [
      {
        "person_id": 1,
        "confidence": 0.6331493258476257,
        "box": { "x1": 67.0, "y1": 7.0, "x2": 173.0, "y2": 305.0 },
        "keypoints": [
          { "label": "nose",      "coordinates": { "x": 0.0, "y": 0.0 }, "confidence": 0.30005577206611633 },
          { "label": "left_eye",  "coordinates": { "x": 0.0, "y": 0.0 }, "confidence": 0.06836472451686859 },
          { "label": "right_eye", "coordinates": { "x": 0.0, "y": 0.0 }, "confidence": 0.3499656617641449 },
          ...
        ]
      },
      {
        "person_id": 2,
        "confidence": 0.615151047706604,
        "box": { "x1": 388.0, "y1": 22.0, "x2": 499.0, "y2": 230.0 },
        "keypoints": [
          { "label": "nose", "coordinates": { "x": 445.0414123535156, "y": 62.61204528808594 }, "confidence": 0.9617932438850403 },
          ...
        ]
      }
    ]
  },
  {
    "frame": 1,
    "detections": [ ... ]
  },
  ...
]
```

### Keypoint Details:

Each person detected in a frame has several keypoints such as `"nose"`, `"left_eye"`, `"right_eye"`, etc., each with the following fields:

- `label`: The name of the keypoint (e.g., `"nose"`, `"left_shoulder"`).
- `coordinates`: The `(x, y)` coordinates of the keypoint.
- `confidence`: The confidence score of the keypoint detection.

## Usage

The dataset can be used for training and validation. To load and process the data, refer to `load_script.py` in the repository, which provides functionality for loading and preprocessing the videos and their associated keypoints.

Ensure that you follow the respective dataset licenses and terms of use when using this dataset for your research or projects.
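
For reference, here is a minimal sketch of how one sample's keypoint JSON could be parsed into per-frame NumPy arrays. This is not the code from `load_script.py`; the function name `load_keypoints`, the `NUM_KEYPOINTS = 17` assumption (COCO-style skeleton), and the example path are illustrative only and should be checked against the actual files.

```python
import json
from pathlib import Path

import numpy as np

# Assumption: a COCO-style 17-keypoint skeleton; verify against the "label"
# fields actually present in the JSON files.
NUM_KEYPOINTS = 17


def load_keypoints(json_path):
    """Parse one per-video JSON file into a list of per-frame arrays.

    Each frame becomes an array of shape (persons, NUM_KEYPOINTS, 3), where the
    last axis holds (x, y, confidence). The number of detected persons varies
    per frame, so a Python list of arrays is returned rather than one tensor.
    """
    with open(json_path, "r") as f:
        frames = json.load(f)

    per_frame = []
    for frame in frames:
        persons = []
        for det in frame["detections"]:
            kps = np.array(
                [
                    [kp["coordinates"]["x"], kp["coordinates"]["y"], kp["confidence"]]
                    for kp in det["keypoints"]
                ],
                dtype=np.float32,
            )
            persons.append(kps)
        # Stack assumes every person has the same number of keypoints.
        per_frame.append(
            np.stack(persons)
            if persons
            else np.zeros((0, NUM_KEYPOINTS, 3), dtype=np.float32)
        )
    return per_frame


# Hypothetical local path; adjust to wherever the dataset is stored.
sample = Path("keypoints-rwf-2000/train/Fight/_2RYnSFPD_U_0/_2RYnSFPD_U_0.json")
frames = load_keypoints(sample)
print(len(frames), frames[0].shape)  # e.g. number of frames, (persons, 17, 3)
```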
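
Similarly, a simple way to enumerate samples is to walk the `train/` and `val/` splits shown above and pair each clip folder's video with its JSON file. This sketch assumes the folder layout described in this README; `ROOT`, `LABELS`, and `collect_samples` are names introduced here for illustration.

```python
from pathlib import Path

# Hypothetical dataset root; adjust to your local copy.
ROOT = Path("keypoints-rwf-2000")
LABELS = {"Fight": 1, "NonFight": 0}


def collect_samples(split):
    """Return (video_path, json_path, label) triples for 'train' or 'val'.

    Assumes <root>/<split>/<class>/<clip_id>/ containing <clip_id>.avi,
    <clip_id>_processed.avi, and <clip_id>.json.
    """
    samples = []
    for class_name, label in LABELS.items():
        for clip_dir in sorted((ROOT / split / class_name).iterdir()):
            if not clip_dir.is_dir():
                continue
            video = clip_dir / f"{clip_dir.name}.avi"
            keypoints = clip_dir / f"{clip_dir.name}.json"
            if video.exists() and keypoints.exists():
                samples.append((video, keypoints, label))
    return samples


train_samples = collect_samples("train")
val_samples = collect_samples("val")
print(f"train: {len(train_samples)} clips, val: {len(val_samples)} clips")
```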