# In-Context Imitation Learning via Next-Token Prediction
by Max (Letian) Fu*, Huang Huang*, Gaurav Datta*, Lawrence Yunliang Chen, William Chung-Ho Panitch, Fangchen Liu, Hui Li, and Ken Goldberg at UC Berkeley and Autodesk (*equal contribution).
[[Paper](https://openreview.net/forum?id=tFEOOH9eH0)] | [[Project Page](https://github.com/Max-Fu/icrt)] | [[Checkpoints](https://huggingface.co/mlfu7/Touch-Vision-Language-Models)] | [[Dataset](https://huggingface.co/datasets/mlfu7/Touch-Vision-Language-Dataset)] | [[Citation](#citation)]
This repo contains the checkpoints for *In-Context Imitation Learning via Next-Token Prediction*. We investigate how to bring the few-shot, in-context learning capability of next-token prediction models (e.g., GPT) into real-robot imitation learning policies.
In particular, we store the pre-trained vision encoder and the ICRT model separately: see [encoder](crossmae_rtx/cross-mae-rtx-vitb.pth) and [ICRT](icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth).
Please refer to the [project page](https://github.com/Max-Fu/icrt) for instructions on installing the repo, training the model, and running inference.
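
Since the encoder and ICRT weights ship as separate files, a minimal sketch of inspecting the two checkpoints with `torch.load` is shown below. This only assumes the file paths listed above; building the actual encoder and ICRT models (and loading these state dicts into them) is handled by the code in the ICRT repo linked on the project page.

```python
import torch

# Checkpoint paths as laid out in this repo (see the links above).
encoder_ckpt_path = "crossmae_rtx/cross-mae-rtx-vitb.pth"
icrt_ckpt_path = "icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth"

# Load the raw checkpoints onto CPU; move tensors to GPU only after the
# corresponding models have been constructed from the ICRT codebase.
encoder_ckpt = torch.load(encoder_ckpt_path, map_location="cpu")
icrt_ckpt = torch.load(icrt_ckpt_path, map_location="cpu")

# Quick sanity check: list the top-level keys of each checkpoint.
for name, ckpt in [("encoder", encoder_ckpt), ("icrt", icrt_ckpt)]:
    keys = list(ckpt.keys()) if isinstance(ckpt, dict) else [type(ckpt).__name__]
    print(f"{name}: {keys[:5]}{' ...' if len(keys) > 5 else ''}")
```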