The dataset contains three configuration records: two for the ClothFold task (identical except that best_epoch and best_valid_loss are 0 in one record and null in the other) and one for the DryCloth task. The full set of parameters:

| Parameter | dtype | ClothFold | DryCloth |
| --- | --- | --- | --- |
| batch_size | int64 | 8 | 8 |
| best_epoch | int64 | 0 | 0 |
| best_valid_loss | int64 | 0 | 0 |
| beta1 | float64 | 0.9 | 0.9 |
| cached_states_path | string | ours_clothfold_n100.pkl | ours_drycloth_n100.pkl |
| collect_data_delta_move_max | float64 | 0.4 | 1 |
| collect_data_delta_move_min | float64 | 0.15 | 1 |
| copy_teach | sequence | ["encoder", "decoder"] | ["encoder", "decoder"] |
| cuda_idx | int64 | 0 | 0 |
| dataf | string | ./data/ours_clothfold_vcd_11 | ./data/drycloth_vcd_12.06.19.09_11 |
| down_sample_scale | int64 | 3 | 3 |
| dt | float64 | 0.01 | 0.01 |
| edge_model_path | null | null | null |
| env_name | string | ClothFold | DryCloth |
| eval | int64 | 0 | 0 |
| exp_name | string | test | test |
| fix_collision_edge | bool | false | false |
| fixed_lr | bool | false | false |
| full_dyn_path | null | null | null |
| full_lr | float64 | 0.0001 | 0.0001 |
| gen_data | int64 | 0 | 0 |
| gen_gif | int64 | 0 | 0 |
| global_size | int64 | 128 | 128 |
| imit_w | int64 | 5 | 5 |
| imit_w_lat | int64 | 1 | 1 |
| load_optim | bool | false | false |
| log_dir | string | data/Clothfold_GNS_12.07.10.59_11 | data/DryCloth_GNS_12.11.09.57_11 |
| lr | float64 | 0.0001 | 0.0001 |
| n_epoch | int64 | 1000 | 1000 |
| n_his | int64 | 5 | 5 |
| n_rollout | int64 | 2000 | 2000 |
| neighbor_radius | float64 | 0.045 | 0.045 |
| nstep_eval_rollout | int64 | 20 | 20 |
| num_variations | int64 | 100 | 100 |
| num_workers | int64 | 10 | 10 |
| partial_dyn_path | null | null | null |
| partial_observable | bool | true | true |
| particle_radius | float64 | 0.00625 | 0.00625 |
| pred_time_interval | int64 | 5 | 5 |
| proc_layer | int64 | 10 | 10 |
| relation_dim | int64 | 7 | 7 |
| reward_w | float64 | 100000 | 100000 |
| save_model_interval | int64 | 5 | 5 |
| seed | int64 | 100 | 100 |
| shape_state_dim | int64 | 14 | 14 |
| state_dim | int64 | 18 | 18 |
| time_step | int64 | 100 | 100 |
| train_mode | string | vsbl | vsbl |
| train_valid_ratio | float64 | 0.9 | 0.9 |
| tune_teach | bool | false | false |
| use_collision_as_mesh_edge | bool | false | false |
| use_mesh_edge | bool | true | true |
| use_rest_distance | bool | true | true |
| use_wandb | bool | false | false |
| voxel_size | float64 | 0.0216 | 0.0216 |
| vsbl_lr | float64 | 0.0001 | 0.0001 |
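Each record can be read back as a plain Python dict. Below is a minimal sketch using the Hugging Face `datasets` library; the repository id and split name are placeholders, since the card does not state them:

```python
# Minimal sketch: inspect the configuration records with the `datasets`
# library. "user/mail-datasets" and the "train" split are assumptions,
# not identifiers confirmed by this card.
from datasets import load_dataset

ds = load_dataset("user/mail-datasets", split="train")

print(ds.column_names)  # ['batch_size', 'best_epoch', ..., 'vsbl_lr']
print(len(ds))          # 3 configuration records

# Each record is a dict keyed by parameter name.
drycloth = next(row for row in ds if row["env_name"] == "DryCloth")
print(drycloth["n_epoch"], drycloth["neighbor_radius"])  # 1000 0.045
```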

# Learning Robot Manipulation from Cross-Morphology Demonstration (CoRL 2023)

[Project website] [arXiv PDF]

This repository hosts the datasets for MAIL.

Authors: Gautam Salhotra*, I-Chun Arthur Liu*, Gaurav S. Sukhatme (* denotes equal contribution)

Some Learning from Demonstrations (LfD) methods handle small mismatches between the action spaces of the teacher and student. Here we address the case where the teacher’s morphology is substantially different from that of the student. Our framework, Morphological Adaptation in Imitation Learning (MAIL), bridges this gap, allowing us to train an agent from demonstrations by other agents with significantly different morphologies. MAIL learns from suboptimal demonstrations, so long as they provide some guidance towards a desired solution. We demonstrate MAIL on manipulation tasks with rigid and deformable objects, including 3D cloth manipulation in the presence of rigid obstacles. We train a visual control policy for a robot with one end-effector using demonstrations from a simulated agent with two end-effectors. MAIL shows up to a 24% improvement in a normalized performance metric over LfD and non-LfD baselines. It has been deployed to a real Franka Panda robot and handles multiple variations in object properties (size, rotation, translation) and in cloth-specific properties (color, thickness, size, material).
