[Dataset viewer: columns are image (width 384–854 px) and label (class label, 18 classes); preview rows from the train_val split, all showing the label "call".]

This dataset contains 31,833 images from HaGRID (HAnd Gesture Recognition Image Dataset) downscaled to 384p. The original dataset is 716GB and contains 552,992 1080p images. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.

Original Authors:

Original Dataset Links

Object Classes

['call',
 'no_gesture',
 'dislike',
 'fist',
 'four',
 'like',
 'mute',
 'ok',
 'one',
 'palm',
 'peace',
 'peace_inverted',
 'rock',
 'stop',
 'stop_inverted',
 'three',
 'three2',
 'two_up',
 'two_up_inverted']
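For training it is often convenient to turn the class list above into id-to-label mappings. A minimal sketch; the indices here simply follow the order of the list above and may not match the dataset's internal label encoding:

```python
# Class names as listed in this card (18 gesture classes plus 'no_gesture')
CLASSES = [
    'call', 'no_gesture', 'dislike', 'fist', 'four', 'like', 'mute',
    'ok', 'one', 'palm', 'peace', 'peace_inverted', 'rock', 'stop',
    'stop_inverted', 'three', 'three2', 'two_up', 'two_up_inverted',
]

# Forward and reverse lookups between class names and integer ids
label2id = {name: idx for idx, name in enumerate(CLASSES)}
id2label = {idx: name for name, idx in label2id.items()}
```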

Annotations

  • bboxes: [top-left-X-position, top-left-Y-position, width, height]
  • The values are normalized to the range [0, 1]: multiply the top-left-X-position and width values by the image width, and the top-left-Y-position and height values by the image height, to get pixel coordinates.
    Example annotation for image 00005c9c-3548-4a8f-9d0b-2dd4aff37fc9:
      • bboxes: [[0.23925175, 0.28595301, 0.25055143, 0.20777627]]
      • labels: [call]
      • leading_hand: right
      • leading_conf: 1
      • user_id: 5a389ffe1bed6660a59f4586c7d8fe2770785e5bf79b09334aa951f6f119c024
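The denormalization step described above can be sketched as a small helper. The function name and the example image size are illustrative, not part of the dataset:

```python
def denormalize_bbox(bbox, img_width, img_height):
    """Convert a normalized [x, y, w, h] box to pixel coordinates.

    x and w are scaled by the image width; y and h by the image height.
    """
    x, y, w, h = bbox
    return (x * img_width, y * img_height, w * img_width, h * img_height)


# Hypothetical 854x384 image (widths in this sample range from 384 to 854 px)
pixel_box = denormalize_bbox([0.23925175, 0.28595301, 0.25055143, 0.20777627],
                             img_width=854, img_height=384)
```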