# In-Context Imitation Learning via Next-Token Prediction
by Max (Letian) Fu*, Huang Huang*, Gaurav Datta*, Lawrence Yunliang Chen, William Chung-Ho Panitch, Fangchen Liu, Hui Li, and Ken Goldberg at UC Berkeley and Autodesk (*equal contribution).
[Paper] | [Project Page] | [Checkpoints] | [Dataset]
This repo contains the checkpoints for In-Context Imitation Learning via Next-Token Prediction. We investigate how to bring the few-shot, in-context learning capability of next-token prediction models (e.g., GPT) to real-robot imitation learning policies.
In particular, we store the pre-trained vision encoder and the ICRT model separately. Please find them in the `encoder`, `ICRT`, and `ICRT-Llama7B` folders.
Please refer to the project page for instructions on installing the repo, training the model, and running inference.
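Since the checkpoints live in separate folders of this repo, one way to fetch only the folder you need is `snapshot_download` from `huggingface_hub` with an `allow_patterns` filter. The sketch below is illustrative, not part of the official instructions; `REPO_ID` is a placeholder you must replace with this repo's actual id, and `fetch_checkpoint` is a hypothetical helper name.

```python
REPO_ID = "your-username/ICRT"  # placeholder -- substitute the actual repo id


def checkpoint_patterns(subfolder: str) -> list[str]:
    """Glob patterns restricting a Hub download to one checkpoint folder."""
    return [f"{subfolder}/*"]


def fetch_checkpoint(subfolder: str, local_dir: str = "./checkpoints") -> str:
    """Download one checkpoint folder (encoder, ICRT, or ICRT-Llama7B).

    Returns the local path of the downloaded snapshot.
    """
    # Imported lazily so the helper above works without huggingface_hub.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id=REPO_ID,
        allow_patterns=checkpoint_patterns(subfolder),
        local_dir=local_dir,
    )


if __name__ == "__main__":
    # e.g., grab only the pre-trained vision encoder
    fetch_checkpoint("encoder")
```

The `allow_patterns` filter avoids downloading the large `ICRT-Llama7B` weights when you only need the vision encoder.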