mlfu7 committed
Commit 117e02f
1 parent: c8ef7a8

Upload folder using huggingface_hub

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -1,10 +1,10 @@
 # In-Context Imitation Learning via Next-Token Prediction
 by <a href="https://max-fu.github.io">Max (Letian) Fu*</a>, <a href="https://qingh097.github.io/">Huang Huang*</a>, <a href="https://www.linkedin.com/in/gaurav-datta/">Gaurav Datta*</a>, <a href="https://yunliangchen.github.io/">Lawrence Yunliang Chen</a>, <a href="https://autolab.berkeley.edu/people">William Chung-Ho Panitch</a>, <a href="https://fangchenliu.github.io/">Fangchen Liu</a>, <a href="https://www.research.autodesk.com/people/hui-li/">Hui Li</a>, and <a href="https://goldberg.berkeley.edu">Ken Goldberg</a> at UC Berkeley and Autodesk (*equal contribution).
 
-[[Paper](https://openreview.net/forum?id=tFEOOH9eH0)] | [[Project Page](https://github.com/Max-Fu/icrt)] | [[Checkpoints](https://huggingface.co/mlfu7/Touch-Vision-Language-Models)] | [[Dataset](https://huggingface.co/datasets/mlfu7/Touch-Vision-Language-Dataset)] | [[Citation](#citation)]
+[[Paper]()] | [[Project Page](https://in-context-robot-transformer.github.io/)] | [[Checkpoints](https://huggingface.co/mlfu7/ICRT)] | [[Dataset](https://huggingface.co/datasets/Ravenh97/ICRT-MT)]
 
 This repo contains the checkpoints for *In-Context Imitation Learning via Next-Token Prediction*. We investigate how to bring few-shot, in-context learning capability that exists in next-token prediction models (i.e. GPT) into real-robot imitation learning policies.
 
 In particular, we store the pre-trained vision encoder and ICRT model separately. Please find them in [encoder](crossmae_rtx/cross-mae-rtx-vitb.pth), [ICRT](icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth), and [ICRT-Llama7B](icrt_llama7b_lora/icrt_llama7b_lora.pth).
 
-Please refer to the [project page](https://github.com/Max-Fu/icrt) on installing the repo, training and inferencing the model.
+Please refer to the [project page](https://github.com/Max-Fu/icrt) on installing the repo, training and inferencing the model.
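The README lists three checkpoint files stored at fixed paths inside the repo. A minimal sketch of how those paths map to direct-download URLs, assuming the updated Checkpoints link (`mlfu7/ICRT`) is the hosting repo and using the standard Hugging Face `resolve/<revision>/<path>` URL convention; the `resolve_url` helper name is ours, not from the repo:

```python
# Checkpoint paths as listed in the README of mlfu7/ICRT (assumed hosting repo).
REPO_ID = "mlfu7/ICRT"
CHECKPOINTS = {
    "encoder": "crossmae_rtx/cross-mae-rtx-vitb.pth",
    "icrt": "icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth",
    "icrt-llama7b": "icrt_llama7b_lora/icrt_llama7b_lora.pth",
}

def resolve_url(name: str, revision: str = "main") -> str:
    """Build the direct-download URL for a named checkpoint on the Hub."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{CHECKPOINTS[name]}"
```

In practice, `huggingface_hub.hf_hub_download(repo_id=REPO_ID, filename=CHECKPOINTS[name])` fetches the same file with local caching, which is usually preferable to raw URLs.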