vovantuan committed · verified
Commit 3a70ca2 · 1 Parent(s): a5081d4

update action task

Files changed (1):
1. README.md (+23 −9)
README.md CHANGED
@@ -20,15 +20,29 @@ The CathAction dataset encompasses annotated frames for catheterization action u

These are five classes: *advance catheter*, *retract catheter*, *advance guidewire*, *retract guidewire*, and *rotate*.

- ### Annotation Files Structure
- The groundtruth CSV file containing 4 columns:
-
- | Column Name | Type | Example | Description |
- | -------------------- | ----------------------- | --------- | ------------------------------------------------------------------------ |
- | `video_id` | string | `video_1` | Video the segment is in |
- | `start_frame` | int | `430` | Start frame of the action |
- | `stop_frame` | int | `643` | End frame of the action |
- | `all_action_classes` | list of int (1 or more) | `[1]` | List of numeric IDs corresponding to all of the parsed Action' classes. |
+ The dataset is organized into the following folders and files:
+
+ - **video_frames/**: Contains extracted video frames for each video.
+ - **feature_extractions/**: Contains pre-extracted RGB features, extracted using [this code](https://github.com/yjxiong/tsn-pytorch).
+ - **training.csv**: Groundtruth CSV file for training data.
+ - **validation.csv**: Groundtruth CSV file for validation data.
+
+ ### Annotation File Structure
+
+ The annotation files (`training.csv` and `validation.csv`) contain four columns with the following structure:
+
+ | Column Name | Type | Example | Description |
+ |----------------------|----------------|-----------|----------------------------------------------------------------------|
+ | `video_id` | string | `video_1` | ID of the video where the action segment is located. |
+ | `start_frame` | int | `430` | Start frame of the action. |
+ | `stop_frame` | int | `643` | End frame of the action. |
+ | `all_action_classes` | list of int(s) | `[1]` | List of numeric IDs for all detected action classes in the segment. |
+
+ The frames and pre-extracted RGB features are located in the `video_frames` and `feature_extractions` folders, respectively, and can be generated using [this code](https://github.com/yjxiong/tsn-pytorch).
+
+ ### Usage
+
+ 1. **Catheterization Action Recognition and Anticipation Models**: Use the `start_frame` and `stop_frame` values, along with the ground truth `all_action_classes` in the CSV file, to train models that recognize action segments and anticipate future catheter actions.

## 2. Collision Detection
The CathAction dataset is designed for the collision detection task, which involves identifying whether the tip of the catheter or guidewire comes into contact with the blood vessel wall.
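
Below is a minimal sketch of how the annotation files added in this commit could be consumed for the recognition/anticipation task. It assumes `pandas` is available, that `all_action_classes` is stored as a list literal such as `[1]`, and that frames live under `video_frames/<video_id>/`; the frame filename pattern and the `segment_frame_paths` helper are illustrative, not part of the dataset.

```python
import ast
import os

import pandas as pd

# Load the groundtruth annotations described in the README
# (columns: video_id, start_frame, stop_frame, all_action_classes).
annotations = pd.read_csv("training.csv")

# `all_action_classes` is stored as text such as "[1]"; parse it into a list of ints.
annotations["all_action_classes"] = annotations["all_action_classes"].apply(ast.literal_eval)


def segment_frame_paths(row, frames_root="video_frames"):
    """Return the frame paths spanning one annotated action segment.

    The frame_{:06d}.png naming is an assumption; adapt it to the actual
    layout of the video_frames/ folder.
    """
    return [
        os.path.join(frames_root, row["video_id"], f"frame_{i:06d}.png")
        for i in range(int(row["start_frame"]), int(row["stop_frame"]) + 1)
    ]


# Iterate over segments to build (frames, labels) pairs for a recognition or
# anticipation model.
for _, row in annotations.iterrows():
    frame_paths = segment_frame_paths(row)
    labels = row["all_action_classes"]  # numeric IDs of the action classes in this segment
    # ...load the frames (or the matching pre-extracted features from
    # feature_extractions/) and feed them to the model here.
```

The same pattern applies to `validation.csv`; for the anticipation variant, a model would typically observe only frames before `start_frame` and predict the upcoming class IDs.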