---
license: apache-2.0
pretty_name: 1X World Model Challenge Dataset
size_categories:
- 10M<n<100M
viewer: false
---
Dataset for the [1X World Model Challenge](https://github.com/1x-technologies/1xgpt).

Download with:
```
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```
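
If you prefer Python, the same download can be done with the `huggingface_hub` API (the `data` target directory simply mirrors the CLI command above):

```
# Equivalent download via the huggingface_hub Python API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="1x-technologies/worldmodel",
    repo_type="dataset",
    local_dir="data",  # same target directory as the CLI example
)
```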

Current version: v1.0

- **magvit2.ckpt** - weights for the [Magvit2](https://github.com/TencentARC/Open-MAGVIT2) image tokenizer we used. We provide both the encoder (tokenizer) and decoder (de-tokenizer) weights.

Contents of train/val_v1.0:
- **video.bin** - 16x16 grids of image patches at 30 Hz; each patch is vector-quantized into one of 2^18 possible integer values. These can be decoded into 256x256 RGB images using the provided `magvit2.ckpt` weights.
- **segment_ids.bin** - for each frame, `segment_ids[i]` uniquely identifies the log that frame `i` came from. You may want to use this to separate non-contiguous frames that come from different videos (transitions).
- **actions/** - a folder of action arrays stored as `np.float32`. For frame `i`, the corresponding action is given by `driving_command[i]`, `joint_pos[i]`, `l_hand_closure[i]`, and so on; a loading sketch follows the shape listing below. The array shapes are as follows (N is the number of frames):
  ```
  {
    joint_pos: (N, 21),
    driving_command: (N, 2),
    neck_desired: (N, 1),
    l_hand_closure: (N, 1),
    r_hand_closure: (N, 1),
  }
  ```
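
For illustration, here is a minimal NumPy loading sketch. The directory name (`train_v1.0`), the `.bin` file names under `actions/`, and the integer dtypes of `video.bin` and `segment_ids.bin` are assumptions, not documented above (token values span 2^18, so a 32-bit dtype is assumed); adjust them if they do not match your downloaded files.

```
import numpy as np

DATA_DIR = "data/train_v1.0"  # assumed directory name for the training split

# Each frame is a 16x16 grid of vector-quantized patch tokens. Token values span
# 2^18 possibilities, so a 16-bit dtype cannot hold them; uint32 is assumed here.
video = np.fromfile(f"{DATA_DIR}/video.bin", dtype=np.uint32).reshape(-1, 16, 16)

# One segment id per frame; int32 is an assumption.
segment_ids = np.fromfile(f"{DATA_DIR}/segment_ids.bin", dtype=np.int32)

# Action arrays are float32 with the shapes listed above; file names are assumed.
joint_pos = np.fromfile(f"{DATA_DIR}/actions/joint_pos.bin", dtype=np.float32).reshape(-1, 21)
driving_command = np.fromfile(f"{DATA_DIR}/actions/driving_command.bin", dtype=np.float32).reshape(-1, 2)

# Split frame indices into contiguous per-log segments so that training windows
# never cross a video boundary (transition).
boundaries = np.where(np.diff(segment_ids) != 0)[0] + 1
segments = np.split(np.arange(len(segment_ids)), boundaries)
```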
We also provide a small `val_v1.0` data split containing held-out examples not seen in the training set, in case you want to try evaluating your model on held-out frames.