<!-- Commit 17edc76 (parent a5348f1) by Gormery Kombo Wanjiru: docs -->
# Keypoints-RWF-total Dataset

This dataset is a **fusion** of three distinct datasets:

1. **RWF-2000**: A dataset of real-world videos covering fight and non-fight scenarios.
2. **Hockey Violence Dataset**: A dataset focused on violent interactions in hockey games.
3. **Airtlab Violence Dataset**: A dataset containing videos of violent incidents in various settings.

**Important Note**: We do not own these datasets. By using this dataset, you agree to follow the rules and licensing agreements of the original datasets. Please review the terms of use for the individual datasets: [RWF-2000 Terms](#), [Hockey Violence Dataset Terms](#), and [Airtlab Violence Dataset Terms](#).

## Dataset Structure

The dataset structure is as follows:

```
../keypoints-rwf-2000/
├── train/
│   ├── Fight/
│   │   ├── _2RYnSFPD_U_0/
│   │   │   ├── _2RYnSFPD_U_0.avi            # Original video
│   │   │   ├── _2RYnSFPD_U_0_processed.avi  # Processed video
│   │   │   └── _2RYnSFPD_U_0.json           # Keypoint/metadata JSON
│   │   └── [more video folders...]
│   └── NonFight/
│       └── [more video folders...]
├── val/
│   ├── Fight/
│   │   ├── _2RYnSFPD_U_0/
│   │   │   ├── _2RYnSFPD_U_0.avi            # Original video
│   │   │   ├── _2RYnSFPD_U_0_processed.avi  # Processed video
│   │   │   └── _2RYnSFPD_U_0.json           # Keypoint/metadata JSON
│   │   └── [more video folders...]
│   └── NonFight/
│       └── [more video folders...]
└── [other files...]
```

### Files Explanation

- **`.avi` files**: Each video folder contains the original video (`.avi`) and a processed video (`_processed.avi`). The processed video is the original video with the detected keypoints and bounding boxes drawn on top.
- **`.json` files**: Each video has a corresponding JSON file that contains the keypoints data.

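Given this layout, each clip's video can be paired with its keypoint JSON by walking the directory tree. The sketch below is illustrative, not part of the dataset's own tooling; the function name and root path are assumptions:

```python
from pathlib import Path

def index_split(root):
    """Collect (video_path, json_path, label) triples for one split.

    Assumes the train/ or val/ layout shown above; `index_split` is an
    illustrative helper, not part of the repository's own scripts.
    """
    samples = []
    for label in ("Fight", "NonFight"):
        label_dir = Path(root) / label
        if not label_dir.is_dir():
            continue
        for clip_dir in sorted(label_dir.iterdir()):
            if not clip_dir.is_dir():
                continue
            # Files inside each clip folder share the folder's name.
            video = clip_dir / f"{clip_dir.name}.avi"
            keypoints = clip_dir / f"{clip_dir.name}.json"
            if video.exists() and keypoints.exists():
                samples.append((str(video), str(keypoints), label))
    return samples
```

For example, `index_split("keypoints-rwf-2000/train")` would return one triple per training clip.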
### JSON Structure

The JSON files are structured as a list of frames, where each frame contains detections for multiple people. Each person in the frame has keypoints with their coordinates and associated confidence scores. Here's an example of the JSON structure (`...` marks elided entries):

```json
[
  {
    "frame": 0,
    "detections": [
      {
        "person_id": 1,
        "confidence": 0.6331493258476257,
        "box": { "x1": 67.0, "y1": 7.0, "x2": 173.0, "y2": 305.0 },
        "keypoints": [
          { "label": "nose",      "coordinates": { "x": 0.0, "y": 0.0 }, "confidence": 0.30005577206611633 },
          { "label": "left_eye",  "coordinates": { "x": 0.0, "y": 0.0 }, "confidence": 0.06836472451686859 },
          { "label": "right_eye", "coordinates": { "x": 0.0, "y": 0.0 }, "confidence": 0.3499656617641449 },
          ...
        ]
      },
      {
        "person_id": 2,
        "confidence": 0.615151047706604,
        "box": { "x1": 388.0, "y1": 22.0, "x2": 499.0, "y2": 230.0 },
        "keypoints": [
          { "label": "nose", "coordinates": { "x": 445.0414123535156, "y": 62.61204528808594 }, "confidence": 0.9617932438850403 },
          ...
        ]
      }
    ]
  },
  {
    "frame": 1,
    "detections": [
      ...
    ]
  },
  ...
]
```
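A keypoint file with this structure can be read using only the standard library. The sketch below follows the layout shown above; `load_keypoints` is an illustrative helper, not the repository's `load_script.py`:

```python
import json

def load_keypoints(path):
    """Return {frame_index: [(person_id, [(label, x, y, conf), ...]), ...]}.

    Follows the JSON structure shown above; this helper is an
    illustrative sketch, not part of the dataset's own tooling.
    """
    with open(path) as f:
        frames = json.load(f)
    clip = {}
    for frame in frames:
        people = []
        for det in frame["detections"]:
            # Flatten each keypoint into a (label, x, y, confidence) tuple.
            kps = [
                (kp["label"],
                 kp["coordinates"]["x"],
                 kp["coordinates"]["y"],
                 kp["confidence"])
                for kp in det["keypoints"]
            ]
            people.append((det["person_id"], kps))
        clip[frame["frame"]] = people
    return clip
```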

### Keypoint Details

Each person detected in a frame has several keypoints such as "nose", "left_eye", and "right_eye", each with the following data:

- `label`: The name of the keypoint (e.g., "nose", "left_shoulder").
- `coordinates`: The `(x, y)` pixel coordinates of the keypoint.
- `confidence`: The confidence score of the keypoint detection.
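In the sample JSON above, keypoints the pose model could not locate appear to be reported at `(0.0, 0.0)` with low confidence (an observation from the example, not documented behavior). A simple way to handle them is to threshold on confidence; the cutoff below is an arbitrary choice:

```python
def reliable_keypoints(keypoints, min_conf=0.5):
    """Drop keypoints below a confidence cutoff.

    min_conf=0.5 is an arbitrary example value, not one prescribed by
    the dataset.
    """
    return [kp for kp in keypoints if kp["confidence"] >= min_conf]
```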

## Usage

You can use the dataset for training and validation. To load and preprocess the videos and their associated keypoints, refer to `load_script.py` in the repository.

Ensure that you follow the respective dataset licenses and terms of use when using this dataset in your research or projects.